Test Report: KVM_Linux_crio 17243

a4c3e20099a4bdf499fee0d2faaf79bc020e16c9:2023-09-14:31017

Failed tests (27/290). A re-run sketch follows the table.

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 155.79
36 TestAddons/StoppedEnableDisable 155.44
75 TestFunctional/serial/LogsFileCmd 1.27
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 175.5
200 TestMultiNode/serial/PingHostFrom2Pods 3.01
206 TestMultiNode/serial/RestartKeepsNodes 685.85
208 TestMultiNode/serial/StopMultiNode 142.93
215 TestPreload 299.68
221 TestRunningBinaryUpgrade 171.22
238 TestPause/serial/SecondStartNoReconfiguration 54.92
257 TestStoppedBinaryUpgrade/Upgrade 257.5
264 TestStartStop/group/no-preload/serial/Stop 139.81
269 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.72
273 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
277 TestStartStop/group/embed-certs/serial/Stop 140.19
278 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
282 TestStartStop/group/old-k8s-version/serial/Stop 139.97
283 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
287 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.13
288 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.02
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.05
290 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.09
291 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 525.53
292 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 542.78
293 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 329.77
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 219.04
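
Each entry above is a Go sub-test from minikube's integration suite, so a single failure can usually be reproduced on its own instead of re-running the whole matrix. A minimal sketch, assuming the e2e harness layout described in minikube's contributing docs and the driver/runtime used in this run (kvm2 + crio); binary names and flags are not re-verified against this exact commit:

  # build the minikube binary and the e2e test binary, then re-run one failing sub-test
  make out/minikube-linux-amd64 out/e2e-linux-amd64
  out/e2e-linux-amd64 \
    -minikube-start-args="--driver=kvm2 --container-runtime=crio" \
    -test.run "TestAddons/parallel/Ingress" -test.v -test.timeout=60m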
TestAddons/parallel/Ingress (155.79s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-452179 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-452179 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:208: (dbg) Done: kubectl --context addons-452179 replace --force -f testdata/nginx-ingress-v1.yaml: (1.238121354s)
addons_test.go:221: (dbg) Run:  kubectl --context addons-452179 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1920c797-5282-4538-b571-a26c3d4d1b76] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1920c797-5282-4538-b571-a26c3d4d1b76] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.055405755s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-452179 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.899643674s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-452179 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.45
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-452179 addons disable ingress-dns --alsologtostderr -v=1: (1.556466481s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-452179 addons disable ingress --alsologtostderr -v=1: (7.666139431s)
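
The decisive failure above is the ssh'd curl: exit status 28 is curl's "operation timed out" code propagated back through ssh, i.e. the request to the ingress controller on 127.0.0.1 inside the VM never answered within the test window, even though the nginx pod itself reached Running within about 13s. A minimal sketch for poking at the same path by hand, using the profile name from this run; the controller namespace and deployment name are assumed from the standard ingress-nginx addon layout and were not verified against the failed cluster:

  # check the controller, then retry the same request the test issues with a verbose, bounded curl
  kubectl --context addons-452179 -n ingress-nginx get pods,svc -o wide
  kubectl --context addons-452179 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
  kubectl --context addons-452179 get ingress -o wide
  out/minikube-linux-amd64 -p addons-452179 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"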
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-452179 -n addons-452179
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-452179 logs -n 25: (1.057278283s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-560258 | jenkins | v1.31.2 | 14 Sep 23 21:36 UTC |                     |
	|         | -p download-only-560258        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-560258 | jenkins | v1.31.2 | 14 Sep 23 21:36 UTC |                     |
	|         | -p download-only-560258        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 14 Sep 23 21:37 UTC | 14 Sep 23 21:37 UTC |
	| delete  | -p download-only-560258        | download-only-560258 | jenkins | v1.31.2 | 14 Sep 23 21:37 UTC | 14 Sep 23 21:37 UTC |
	| delete  | -p download-only-560258        | download-only-560258 | jenkins | v1.31.2 | 14 Sep 23 21:37 UTC | 14 Sep 23 21:37 UTC |
	| start   | --download-only -p             | binary-mirror-674696 | jenkins | v1.31.2 | 14 Sep 23 21:37 UTC |                     |
	|         | binary-mirror-674696           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38139         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-674696        | binary-mirror-674696 | jenkins | v1.31.2 | 14 Sep 23 21:37 UTC | 14 Sep 23 21:37 UTC |
	| start   | -p addons-452179               | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:37 UTC | 14 Sep 23 21:39 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:39 UTC | 14 Sep 23 21:39 UTC |
	|         | -p addons-452179               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:39 UTC | 14 Sep 23 21:39 UTC |
	|         | addons-452179                  |                      |         |         |                     |                     |
	| addons  | addons-452179 addons           | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:39 UTC | 14 Sep 23 21:39 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:39 UTC | 14 Sep 23 21:39 UTC |
	|         | addons-452179                  |                      |         |         |                     |                     |
	| ip      | addons-452179 ip               | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:39 UTC | 14 Sep 23 21:39 UTC |
	| addons  | addons-452179 addons disable   | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:39 UTC | 14 Sep 23 21:39 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-452179 addons disable   | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:39 UTC | 14 Sep 23 21:39 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| ssh     | addons-452179 ssh curl -s      | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                      |         |         |                     |                     |
	|         | nginx.example.com'             |                      |         |         |                     |                     |
	| addons  | addons-452179 addons           | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:40 UTC | 14 Sep 23 21:41 UTC |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-452179 addons           | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:41 UTC | 14 Sep 23 21:41 UTC |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-452179 ip               | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:42 UTC | 14 Sep 23 21:42 UTC |
	| addons  | addons-452179 addons disable   | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:42 UTC | 14 Sep 23 21:42 UTC |
	|         | ingress-dns --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-452179 addons disable   | addons-452179        | jenkins | v1.31.2 | 14 Sep 23 21:42 UTC | 14 Sep 23 21:42 UTC |
	|         | ingress --alsologtostderr -v=1 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 21:37:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 21:37:04.134991   13879 out.go:296] Setting OutFile to fd 1 ...
	I0914 21:37:04.135112   13879 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:37:04.135121   13879 out.go:309] Setting ErrFile to fd 2...
	I0914 21:37:04.135132   13879 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:37:04.135294   13879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 21:37:04.135908   13879 out.go:303] Setting JSON to false
	I0914 21:37:04.136675   13879 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1166,"bootTime":1694726258,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 21:37:04.136733   13879 start.go:138] virtualization: kvm guest
	I0914 21:37:04.138964   13879 out.go:177] * [addons-452179] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 21:37:04.140557   13879 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 21:37:04.141921   13879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 21:37:04.140596   13879 notify.go:220] Checking for updates...
	I0914 21:37:04.144654   13879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:37:04.146134   13879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:37:04.147554   13879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 21:37:04.148949   13879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 21:37:04.150416   13879 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 21:37:04.182205   13879 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 21:37:04.183534   13879 start.go:298] selected driver: kvm2
	I0914 21:37:04.183547   13879 start.go:902] validating driver "kvm2" against <nil>
	I0914 21:37:04.183557   13879 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 21:37:04.184170   13879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:37:04.184237   13879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 21:37:04.197961   13879 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 21:37:04.198003   13879 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 21:37:04.198187   13879 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 21:37:04.198219   13879 cni.go:84] Creating CNI manager for ""
	I0914 21:37:04.198232   13879 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 21:37:04.198243   13879 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 21:37:04.198263   13879 start_flags.go:321] config:
	{Name:addons-452179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-452179 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:37:04.198401   13879 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:37:04.200214   13879 out.go:177] * Starting control plane node addons-452179 in cluster addons-452179
	I0914 21:37:04.201688   13879 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 21:37:04.201721   13879 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0914 21:37:04.201734   13879 cache.go:57] Caching tarball of preloaded images
	I0914 21:37:04.201817   13879 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 21:37:04.201829   13879 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 21:37:04.202124   13879 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/config.json ...
	I0914 21:37:04.202147   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/config.json: {Name:mkfa3cd5673c30b75e81dc2af861c971a94aec0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:04.202300   13879 start.go:365] acquiring machines lock for addons-452179: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 21:37:04.202358   13879 start.go:369] acquired machines lock for "addons-452179" in 41.635µs
	I0914 21:37:04.202381   13879 start.go:93] Provisioning new machine with config: &{Name:addons-452179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-452179 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 21:37:04.202435   13879 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 21:37:04.204199   13879 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 21:37:04.204319   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:37:04.204361   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:37:04.217640   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0914 21:37:04.218074   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:37:04.218597   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:37:04.218619   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:37:04.218931   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:37:04.219124   13879 main.go:141] libmachine: (addons-452179) Calling .GetMachineName
	I0914 21:37:04.219257   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:37:04.219406   13879 start.go:159] libmachine.API.Create for "addons-452179" (driver="kvm2")
	I0914 21:37:04.219434   13879 client.go:168] LocalClient.Create starting
	I0914 21:37:04.219478   13879 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem
	I0914 21:37:04.558920   13879 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem
	I0914 21:37:04.741635   13879 main.go:141] libmachine: Running pre-create checks...
	I0914 21:37:04.741660   13879 main.go:141] libmachine: (addons-452179) Calling .PreCreateCheck
	I0914 21:37:04.742140   13879 main.go:141] libmachine: (addons-452179) Calling .GetConfigRaw
	I0914 21:37:04.742579   13879 main.go:141] libmachine: Creating machine...
	I0914 21:37:04.742595   13879 main.go:141] libmachine: (addons-452179) Calling .Create
	I0914 21:37:04.742737   13879 main.go:141] libmachine: (addons-452179) Creating KVM machine...
	I0914 21:37:04.744027   13879 main.go:141] libmachine: (addons-452179) DBG | found existing default KVM network
	I0914 21:37:04.744705   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:04.744571   13901 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001478f0}
	I0914 21:37:04.750588   13879 main.go:141] libmachine: (addons-452179) DBG | trying to create private KVM network mk-addons-452179 192.168.39.0/24...
	I0914 21:37:04.815433   13879 main.go:141] libmachine: (addons-452179) DBG | private KVM network mk-addons-452179 192.168.39.0/24 created
	I0914 21:37:04.815455   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:04.815401   13901 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:37:04.815492   13879 main.go:141] libmachine: (addons-452179) Setting up store path in /home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179 ...
	I0914 21:37:04.815515   13879 main.go:141] libmachine: (addons-452179) Building disk image from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso
	I0914 21:37:04.815540   13879 main.go:141] libmachine: (addons-452179) Downloading /home/jenkins/minikube-integration/17243-6287/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso...
	I0914 21:37:05.034913   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:05.034794   13901 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa...
	I0914 21:37:05.200233   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:05.200108   13901 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/addons-452179.rawdisk...
	I0914 21:37:05.200267   13879 main.go:141] libmachine: (addons-452179) DBG | Writing magic tar header
	I0914 21:37:05.200278   13879 main.go:141] libmachine: (addons-452179) DBG | Writing SSH key tar header
	I0914 21:37:05.200286   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:05.200249   13901 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179 ...
	I0914 21:37:05.200437   13879 main.go:141] libmachine: (addons-452179) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179 (perms=drwx------)
	I0914 21:37:05.200472   13879 main.go:141] libmachine: (addons-452179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179
	I0914 21:37:05.200482   13879 main.go:141] libmachine: (addons-452179) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines (perms=drwxr-xr-x)
	I0914 21:37:05.200492   13879 main.go:141] libmachine: (addons-452179) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube (perms=drwxr-xr-x)
	I0914 21:37:05.200499   13879 main.go:141] libmachine: (addons-452179) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287 (perms=drwxrwxr-x)
	I0914 21:37:05.200508   13879 main.go:141] libmachine: (addons-452179) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 21:37:05.200514   13879 main.go:141] libmachine: (addons-452179) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 21:37:05.200522   13879 main.go:141] libmachine: (addons-452179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines
	I0914 21:37:05.200532   13879 main.go:141] libmachine: (addons-452179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:37:05.200541   13879 main.go:141] libmachine: (addons-452179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287
	I0914 21:37:05.200548   13879 main.go:141] libmachine: (addons-452179) Creating domain...
	I0914 21:37:05.200557   13879 main.go:141] libmachine: (addons-452179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 21:37:05.200563   13879 main.go:141] libmachine: (addons-452179) DBG | Checking permissions on dir: /home/jenkins
	I0914 21:37:05.200572   13879 main.go:141] libmachine: (addons-452179) DBG | Checking permissions on dir: /home
	I0914 21:37:05.200579   13879 main.go:141] libmachine: (addons-452179) DBG | Skipping /home - not owner
	I0914 21:37:05.201504   13879 main.go:141] libmachine: (addons-452179) define libvirt domain using xml: 
	I0914 21:37:05.201516   13879 main.go:141] libmachine: (addons-452179) <domain type='kvm'>
	I0914 21:37:05.201524   13879 main.go:141] libmachine: (addons-452179)   <name>addons-452179</name>
	I0914 21:37:05.201543   13879 main.go:141] libmachine: (addons-452179)   <memory unit='MiB'>4000</memory>
	I0914 21:37:05.201559   13879 main.go:141] libmachine: (addons-452179)   <vcpu>2</vcpu>
	I0914 21:37:05.201568   13879 main.go:141] libmachine: (addons-452179)   <features>
	I0914 21:37:05.201578   13879 main.go:141] libmachine: (addons-452179)     <acpi/>
	I0914 21:37:05.201587   13879 main.go:141] libmachine: (addons-452179)     <apic/>
	I0914 21:37:05.201593   13879 main.go:141] libmachine: (addons-452179)     <pae/>
	I0914 21:37:05.201601   13879 main.go:141] libmachine: (addons-452179)     
	I0914 21:37:05.201607   13879 main.go:141] libmachine: (addons-452179)   </features>
	I0914 21:37:05.201615   13879 main.go:141] libmachine: (addons-452179)   <cpu mode='host-passthrough'>
	I0914 21:37:05.201622   13879 main.go:141] libmachine: (addons-452179)   
	I0914 21:37:05.201638   13879 main.go:141] libmachine: (addons-452179)   </cpu>
	I0914 21:37:05.201652   13879 main.go:141] libmachine: (addons-452179)   <os>
	I0914 21:37:05.201661   13879 main.go:141] libmachine: (addons-452179)     <type>hvm</type>
	I0914 21:37:05.201680   13879 main.go:141] libmachine: (addons-452179)     <boot dev='cdrom'/>
	I0914 21:37:05.201772   13879 main.go:141] libmachine: (addons-452179)     <boot dev='hd'/>
	I0914 21:37:05.201814   13879 main.go:141] libmachine: (addons-452179)     <bootmenu enable='no'/>
	I0914 21:37:05.201840   13879 main.go:141] libmachine: (addons-452179)   </os>
	I0914 21:37:05.201857   13879 main.go:141] libmachine: (addons-452179)   <devices>
	I0914 21:37:05.201874   13879 main.go:141] libmachine: (addons-452179)     <disk type='file' device='cdrom'>
	I0914 21:37:05.201899   13879 main.go:141] libmachine: (addons-452179)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/boot2docker.iso'/>
	I0914 21:37:05.201916   13879 main.go:141] libmachine: (addons-452179)       <target dev='hdc' bus='scsi'/>
	I0914 21:37:05.201931   13879 main.go:141] libmachine: (addons-452179)       <readonly/>
	I0914 21:37:05.201946   13879 main.go:141] libmachine: (addons-452179)     </disk>
	I0914 21:37:05.201962   13879 main.go:141] libmachine: (addons-452179)     <disk type='file' device='disk'>
	I0914 21:37:05.201981   13879 main.go:141] libmachine: (addons-452179)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 21:37:05.202001   13879 main.go:141] libmachine: (addons-452179)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/addons-452179.rawdisk'/>
	I0914 21:37:05.202017   13879 main.go:141] libmachine: (addons-452179)       <target dev='hda' bus='virtio'/>
	I0914 21:37:05.202035   13879 main.go:141] libmachine: (addons-452179)     </disk>
	I0914 21:37:05.202052   13879 main.go:141] libmachine: (addons-452179)     <interface type='network'>
	I0914 21:37:05.202068   13879 main.go:141] libmachine: (addons-452179)       <source network='mk-addons-452179'/>
	I0914 21:37:05.202084   13879 main.go:141] libmachine: (addons-452179)       <model type='virtio'/>
	I0914 21:37:05.202097   13879 main.go:141] libmachine: (addons-452179)     </interface>
	I0914 21:37:05.202124   13879 main.go:141] libmachine: (addons-452179)     <interface type='network'>
	I0914 21:37:05.202150   13879 main.go:141] libmachine: (addons-452179)       <source network='default'/>
	I0914 21:37:05.202162   13879 main.go:141] libmachine: (addons-452179)       <model type='virtio'/>
	I0914 21:37:05.202176   13879 main.go:141] libmachine: (addons-452179)     </interface>
	I0914 21:37:05.202192   13879 main.go:141] libmachine: (addons-452179)     <serial type='pty'>
	I0914 21:37:05.202206   13879 main.go:141] libmachine: (addons-452179)       <target port='0'/>
	I0914 21:37:05.202220   13879 main.go:141] libmachine: (addons-452179)     </serial>
	I0914 21:37:05.202234   13879 main.go:141] libmachine: (addons-452179)     <console type='pty'>
	I0914 21:37:05.202250   13879 main.go:141] libmachine: (addons-452179)       <target type='serial' port='0'/>
	I0914 21:37:05.202269   13879 main.go:141] libmachine: (addons-452179)     </console>
	I0914 21:37:05.202289   13879 main.go:141] libmachine: (addons-452179)     <rng model='virtio'>
	I0914 21:37:05.202309   13879 main.go:141] libmachine: (addons-452179)       <backend model='random'>/dev/random</backend>
	I0914 21:37:05.202409   13879 main.go:141] libmachine: (addons-452179)     </rng>
	I0914 21:37:05.202431   13879 main.go:141] libmachine: (addons-452179)     
	I0914 21:37:05.202447   13879 main.go:141] libmachine: (addons-452179)     
	I0914 21:37:05.202462   13879 main.go:141] libmachine: (addons-452179)   </devices>
	I0914 21:37:05.202473   13879 main.go:141] libmachine: (addons-452179) </domain>
	I0914 21:37:05.202489   13879 main.go:141] libmachine: (addons-452179) 
	I0914 21:37:05.207503   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:57:85:97 in network default
	I0914 21:37:05.208058   13879 main.go:141] libmachine: (addons-452179) Ensuring networks are active...
	I0914 21:37:05.208085   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:05.208681   13879 main.go:141] libmachine: (addons-452179) Ensuring network default is active
	I0914 21:37:05.208940   13879 main.go:141] libmachine: (addons-452179) Ensuring network mk-addons-452179 is active
	I0914 21:37:05.209504   13879 main.go:141] libmachine: (addons-452179) Getting domain xml...
	I0914 21:37:05.210125   13879 main.go:141] libmachine: (addons-452179) Creating domain...
	I0914 21:37:06.624837   13879 main.go:141] libmachine: (addons-452179) Waiting to get IP...
	I0914 21:37:06.625571   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:06.626099   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:06.626159   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:06.626065   13901 retry.go:31] will retry after 310.084511ms: waiting for machine to come up
	I0914 21:37:06.937766   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:06.938246   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:06.938277   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:06.938181   13901 retry.go:31] will retry after 293.403488ms: waiting for machine to come up
	I0914 21:37:07.233698   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:07.234146   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:07.234191   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:07.234093   13901 retry.go:31] will retry after 478.756974ms: waiting for machine to come up
	I0914 21:37:07.714894   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:07.715335   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:07.715365   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:07.715290   13901 retry.go:31] will retry after 394.323296ms: waiting for machine to come up
	I0914 21:37:08.110984   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:08.111450   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:08.111515   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:08.111421   13901 retry.go:31] will retry after 717.032059ms: waiting for machine to come up
	I0914 21:37:08.830330   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:08.830740   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:08.830768   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:08.830688   13901 retry.go:31] will retry after 575.807555ms: waiting for machine to come up
	I0914 21:37:09.408123   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:09.408568   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:09.408601   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:09.408524   13901 retry.go:31] will retry after 818.066391ms: waiting for machine to come up
	I0914 21:37:10.228618   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:10.228972   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:10.228999   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:10.228933   13901 retry.go:31] will retry after 1.264482376s: waiting for machine to come up
	I0914 21:37:11.495381   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:11.495892   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:11.495916   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:11.495785   13901 retry.go:31] will retry after 1.554032785s: waiting for machine to come up
	I0914 21:37:13.051648   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:13.052078   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:13.052099   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:13.052053   13901 retry.go:31] will retry after 1.451283106s: waiting for machine to come up
	I0914 21:37:14.504430   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:14.504869   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:14.504899   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:14.504797   13901 retry.go:31] will retry after 2.41891722s: waiting for machine to come up
	I0914 21:37:16.926595   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:16.927101   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:16.927128   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:16.927060   13901 retry.go:31] will retry after 3.536252186s: waiting for machine to come up
	I0914 21:37:20.464616   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:20.465069   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:20.465092   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:20.465027   13901 retry.go:31] will retry after 4.535178838s: waiting for machine to come up
	I0914 21:37:25.001426   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:25.001769   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find current IP address of domain addons-452179 in network mk-addons-452179
	I0914 21:37:25.001794   13879 main.go:141] libmachine: (addons-452179) DBG | I0914 21:37:25.001715   13901 retry.go:31] will retry after 5.078122831s: waiting for machine to come up
	I0914 21:37:30.081430   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.081836   13879 main.go:141] libmachine: (addons-452179) Found IP for machine: 192.168.39.45
	I0914 21:37:30.081860   13879 main.go:141] libmachine: (addons-452179) Reserving static IP address...
	I0914 21:37:30.081881   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has current primary IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.082176   13879 main.go:141] libmachine: (addons-452179) DBG | unable to find host DHCP lease matching {name: "addons-452179", mac: "52:54:00:d4:c1:1e", ip: "192.168.39.45"} in network mk-addons-452179
	I0914 21:37:30.150788   13879 main.go:141] libmachine: (addons-452179) DBG | Getting to WaitForSSH function...
	I0914 21:37:30.150817   13879 main.go:141] libmachine: (addons-452179) Reserved static IP address: 192.168.39.45
	I0914 21:37:30.150830   13879 main.go:141] libmachine: (addons-452179) Waiting for SSH to be available...
	I0914 21:37:30.153141   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.153566   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:30.153605   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.153790   13879 main.go:141] libmachine: (addons-452179) DBG | Using SSH client type: external
	I0914 21:37:30.153818   13879 main.go:141] libmachine: (addons-452179) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa (-rw-------)
	I0914 21:37:30.153849   13879 main.go:141] libmachine: (addons-452179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 21:37:30.153860   13879 main.go:141] libmachine: (addons-452179) DBG | About to run SSH command:
	I0914 21:37:30.153867   13879 main.go:141] libmachine: (addons-452179) DBG | exit 0
	I0914 21:37:30.286766   13879 main.go:141] libmachine: (addons-452179) DBG | SSH cmd err, output: <nil>: 
	I0914 21:37:30.287027   13879 main.go:141] libmachine: (addons-452179) KVM machine creation complete!
	I0914 21:37:30.287309   13879 main.go:141] libmachine: (addons-452179) Calling .GetConfigRaw
	I0914 21:37:30.287841   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:37:30.288089   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:37:30.288285   13879 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 21:37:30.288299   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:37:30.289830   13879 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 21:37:30.289845   13879 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 21:37:30.289851   13879 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 21:37:30.289857   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:30.291912   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.292238   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:30.292271   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.292403   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:30.292569   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.292701   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.292833   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:30.292990   13879 main.go:141] libmachine: Using SSH client type: native
	I0914 21:37:30.293401   13879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 21:37:30.293417   13879 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 21:37:30.398601   13879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 21:37:30.398633   13879 main.go:141] libmachine: Detecting the provisioner...
	I0914 21:37:30.398641   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:30.401391   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.401747   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:30.401786   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.401938   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:30.402132   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.402279   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.402393   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:30.402543   13879 main.go:141] libmachine: Using SSH client type: native
	I0914 21:37:30.402841   13879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 21:37:30.402852   13879 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 21:37:30.507732   13879 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g52d8811-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0914 21:37:30.507797   13879 main.go:141] libmachine: found compatible host: buildroot
	I0914 21:37:30.507804   13879 main.go:141] libmachine: Provisioning with buildroot...
	I0914 21:37:30.507812   13879 main.go:141] libmachine: (addons-452179) Calling .GetMachineName
	I0914 21:37:30.508047   13879 buildroot.go:166] provisioning hostname "addons-452179"
	I0914 21:37:30.508070   13879 main.go:141] libmachine: (addons-452179) Calling .GetMachineName
	I0914 21:37:30.508212   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:30.510642   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.510969   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:30.511001   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.511108   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:30.511287   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.511442   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.511608   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:30.511769   13879 main.go:141] libmachine: Using SSH client type: native
	I0914 21:37:30.512125   13879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 21:37:30.512140   13879 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-452179 && echo "addons-452179" | sudo tee /etc/hostname
	I0914 21:37:30.626471   13879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-452179
	
	I0914 21:37:30.626517   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:30.629526   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.629924   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:30.629955   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.630122   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:30.630328   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.630505   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.630653   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:30.630835   13879 main.go:141] libmachine: Using SSH client type: native
	I0914 21:37:30.631128   13879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 21:37:30.631144   13879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-452179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-452179/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-452179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 21:37:30.742099   13879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 21:37:30.742133   13879 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 21:37:30.742166   13879 buildroot.go:174] setting up certificates
	I0914 21:37:30.742178   13879 provision.go:83] configureAuth start
	I0914 21:37:30.742193   13879 main.go:141] libmachine: (addons-452179) Calling .GetMachineName
	I0914 21:37:30.742506   13879 main.go:141] libmachine: (addons-452179) Calling .GetIP
	I0914 21:37:30.745184   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.745625   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:30.745658   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.745815   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:30.747885   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.748210   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:30.748233   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.748390   13879 provision.go:138] copyHostCerts
	I0914 21:37:30.748453   13879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 21:37:30.748555   13879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 21:37:30.748609   13879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 21:37:30.748652   13879 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.addons-452179 san=[192.168.39.45 192.168.39.45 localhost 127.0.0.1 minikube addons-452179]
	I0914 21:37:30.872761   13879 provision.go:172] copyRemoteCerts
	I0914 21:37:30.872815   13879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 21:37:30.872836   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:30.875540   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.875970   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:30.875992   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:30.876192   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:30.876373   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:30.876561   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:30.876690   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:37:30.956092   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 21:37:30.976219   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 21:37:30.995874   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 21:37:31.014801   13879 provision.go:86] duration metric: configureAuth took 272.613117ms
	I0914 21:37:31.014823   13879 buildroot.go:189] setting minikube options for container-runtime
	I0914 21:37:31.015036   13879 config.go:182] Loaded profile config "addons-452179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 21:37:31.015123   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:31.017415   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.017753   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:31.018117   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.018122   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:31.019005   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:31.019328   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:31.019482   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:31.019643   13879 main.go:141] libmachine: Using SSH client type: native
	I0914 21:37:31.019972   13879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 21:37:31.019995   13879 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 21:37:31.308785   13879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 21:37:31.308813   13879 main.go:141] libmachine: Checking connection to Docker...
	I0914 21:37:31.308863   13879 main.go:141] libmachine: (addons-452179) Calling .GetURL
	I0914 21:37:31.310138   13879 main.go:141] libmachine: (addons-452179) DBG | Using libvirt version 6000000
	I0914 21:37:31.312574   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.312876   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:31.312905   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.313059   13879 main.go:141] libmachine: Docker is up and running!
	I0914 21:37:31.313080   13879 main.go:141] libmachine: Reticulating splines...
	I0914 21:37:31.313086   13879 client.go:171] LocalClient.Create took 27.09364582s
	I0914 21:37:31.313106   13879 start.go:167] duration metric: libmachine.API.Create for "addons-452179" took 27.093703071s
	I0914 21:37:31.313122   13879 start.go:300] post-start starting for "addons-452179" (driver="kvm2")
	I0914 21:37:31.313134   13879 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 21:37:31.313153   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:37:31.313392   13879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 21:37:31.313416   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:31.315535   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.315934   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:31.315972   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.316102   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:31.316299   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:31.316459   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:31.316610   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:37:31.395809   13879 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 21:37:31.399603   13879 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 21:37:31.399627   13879 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 21:37:31.399694   13879 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 21:37:31.399742   13879 start.go:303] post-start completed in 86.61197ms
	I0914 21:37:31.399783   13879 main.go:141] libmachine: (addons-452179) Calling .GetConfigRaw
	I0914 21:37:31.400284   13879 main.go:141] libmachine: (addons-452179) Calling .GetIP
	I0914 21:37:31.402747   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.403096   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:31.403139   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.403279   13879 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/config.json ...
	I0914 21:37:31.403569   13879 start.go:128] duration metric: createHost completed in 27.201123396s
	I0914 21:37:31.403599   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:31.405388   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.405666   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:31.405696   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.405867   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:31.406038   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:31.406230   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:31.406344   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:31.406508   13879 main.go:141] libmachine: Using SSH client type: native
	I0914 21:37:31.406945   13879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 21:37:31.406961   13879 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 21:37:31.511643   13879 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694727451.486573319
	
	I0914 21:37:31.511662   13879 fix.go:206] guest clock: 1694727451.486573319
	I0914 21:37:31.511669   13879 fix.go:219] Guest: 2023-09-14 21:37:31.486573319 +0000 UTC Remote: 2023-09-14 21:37:31.403584852 +0000 UTC m=+27.299150627 (delta=82.988467ms)
	I0914 21:37:31.511686   13879 fix.go:190] guest clock delta is within tolerance: 82.988467ms
	I0914 21:37:31.511690   13879 start.go:83] releasing machines lock for "addons-452179", held for 27.309321061s
	I0914 21:37:31.511727   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:37:31.511962   13879 main.go:141] libmachine: (addons-452179) Calling .GetIP
	I0914 21:37:31.514725   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.515055   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:31.515078   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.515269   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:37:31.515792   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:37:31.515988   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:37:31.516088   13879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 21:37:31.516130   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:31.516218   13879 ssh_runner.go:195] Run: cat /version.json
	I0914 21:37:31.516243   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:37:31.518840   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.519164   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:31.519193   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.519217   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.519341   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:31.519528   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:31.519694   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:31.519701   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:31.519770   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:31.519845   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:37:31.519849   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:37:31.520015   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:37:31.520159   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:37:31.520292   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:37:31.595521   13879 ssh_runner.go:195] Run: systemctl --version
	I0914 21:37:31.631791   13879 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 21:37:31.782600   13879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 21:37:31.787870   13879 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 21:37:31.787932   13879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 21:37:31.800197   13879 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 21:37:31.800220   13879 start.go:469] detecting cgroup driver to use...
	I0914 21:37:31.800269   13879 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 21:37:31.813653   13879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 21:37:31.825304   13879 docker.go:196] disabling cri-docker service (if available) ...
	I0914 21:37:31.825354   13879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 21:37:31.837530   13879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 21:37:31.849312   13879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 21:37:31.957832   13879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 21:37:32.072953   13879 docker.go:212] disabling docker service ...
	I0914 21:37:32.073015   13879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 21:37:32.085711   13879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 21:37:32.096551   13879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 21:37:32.192674   13879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 21:37:32.287215   13879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 21:37:32.298334   13879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 21:37:32.313554   13879 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 21:37:32.313610   13879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:37:32.321655   13879 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 21:37:32.321718   13879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:37:32.329642   13879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:37:32.337889   13879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:37:32.346037   13879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 21:37:32.359333   13879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 21:37:32.366581   13879 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 21:37:32.366634   13879 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 21:37:32.377691   13879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 21:37:32.384754   13879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 21:37:32.490612   13879 ssh_runner.go:195] Run: sudo systemctl restart crio
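	Note on the CRI-O runtime configuration applied above: the three sed edits rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image, cgroup manager, and conmon cgroup match what kubeadm expects, and the "sudo systemctl restart crio" that follows picks the changes up. A minimal sketch of the resulting drop-in (inferred from the sed patterns above, not captured from this run) looks roughly like:
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"
	
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
	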
	I0914 21:37:32.642376   13879 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 21:37:32.642455   13879 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 21:37:32.647564   13879 start.go:537] Will wait 60s for crictl version
	I0914 21:37:32.647615   13879 ssh_runner.go:195] Run: which crictl
	I0914 21:37:32.650871   13879 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 21:37:32.677380   13879 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 21:37:32.677473   13879 ssh_runner.go:195] Run: crio --version
	I0914 21:37:32.718959   13879 ssh_runner.go:195] Run: crio --version
	I0914 21:37:32.763497   13879 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 21:37:32.765031   13879 main.go:141] libmachine: (addons-452179) Calling .GetIP
	I0914 21:37:32.768066   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:32.768392   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:37:32.768414   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:37:32.768619   13879 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 21:37:32.772343   13879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 21:37:32.782711   13879 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 21:37:32.782759   13879 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 21:37:32.811813   13879 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 21:37:32.811877   13879 ssh_runner.go:195] Run: which lz4
	I0914 21:37:32.815356   13879 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 21:37:32.819088   13879 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 21:37:32.819117   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 21:37:34.405532   13879 crio.go:444] Took 1.590218 seconds to copy over tarball
	I0914 21:37:34.405591   13879 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 21:37:37.289951   13879 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.884338552s)
	I0914 21:37:37.289973   13879 crio.go:451] Took 2.884416 seconds to extract the tarball
	I0914 21:37:37.289982   13879 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 21:37:37.332288   13879 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 21:37:37.383290   13879 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 21:37:37.383311   13879 cache_images.go:84] Images are preloaded, skipping loading
	I0914 21:37:37.383384   13879 ssh_runner.go:195] Run: crio config
	I0914 21:37:37.436015   13879 cni.go:84] Creating CNI manager for ""
	I0914 21:37:37.436036   13879 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 21:37:37.436053   13879 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 21:37:37.436070   13879 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-452179 NodeName:addons-452179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 21:37:37.436182   13879 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-452179"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 21:37:37.436247   13879 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-452179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-452179 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
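	Note on the generated kubeadm and kubelet configuration above: for comparison, kubeadm can print its stock defaults for the same config objects. This is a side note rather than part of the logged run, and the flags assume kubeadm v1.28:
	
		# InitConfiguration / ClusterConfiguration defaults
		kubeadm config print init-defaults
		# also include the kubelet and kube-proxy component configs
		kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration
	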
	I0914 21:37:37.436292   13879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 21:37:37.444913   13879 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 21:37:37.444979   13879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 21:37:37.453337   13879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0914 21:37:37.467190   13879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 21:37:37.481061   13879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0914 21:37:37.494791   13879 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I0914 21:37:37.498226   13879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 21:37:37.508192   13879 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179 for IP: 192.168.39.45
	I0914 21:37:37.508225   13879 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:37.508364   13879 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 21:37:37.670881   13879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt ...
	I0914 21:37:37.670910   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt: {Name:mk11721075a92f0d5aa8a53143cab88c9c2e02c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:37.671052   13879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key ...
	I0914 21:37:37.671062   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key: {Name:mk3cd6c472fe72f0a1dd5a6728178e6254259cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:37.671131   13879 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 21:37:37.824638   13879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt ...
	I0914 21:37:37.824665   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt: {Name:mk9a0c478d2fe8621f19a4aa9d4c921705f9bbae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:37.824845   13879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key ...
	I0914 21:37:37.824861   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key: {Name:mkc09440ce82e994147d2d7df280527f42571bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:37.824985   13879 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.key
	I0914 21:37:37.825011   13879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt with IP's: []
	I0914 21:37:38.072319   13879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt ...
	I0914 21:37:38.072348   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: {Name:mkc56baf4cbb7fb0b30b7e9e8f6d4235e4dc7e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:38.072521   13879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.key ...
	I0914 21:37:38.072535   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.key: {Name:mka95902bb50f949271f6f019b84e314615aacaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:38.072623   13879 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.key.7aba1c1f
	I0914 21:37:38.072645   13879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.crt.7aba1c1f with IP's: [192.168.39.45 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 21:37:38.144453   13879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.crt.7aba1c1f ...
	I0914 21:37:38.144482   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.crt.7aba1c1f: {Name:mk52ed3d963da15b47b77710177dffe5251e3edf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:38.144641   13879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.key.7aba1c1f ...
	I0914 21:37:38.144656   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.key.7aba1c1f: {Name:mk1c74dbb6f579f74b44ac4a836e63e27e056867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:38.144748   13879 certs.go:337] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.crt.7aba1c1f -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.crt
	I0914 21:37:38.144830   13879 certs.go:341] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.key.7aba1c1f -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.key
	I0914 21:37:38.144894   13879 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/proxy-client.key
	I0914 21:37:38.144916   13879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/proxy-client.crt with IP's: []
	I0914 21:37:38.255324   13879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/proxy-client.crt ...
	I0914 21:37:38.255353   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/proxy-client.crt: {Name:mkf17751e14b737e1b51a79598331d855d2bc66e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:38.255535   13879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/proxy-client.key ...
	I0914 21:37:38.255549   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/proxy-client.key: {Name:mk46227f102165f9329d0afe564de94dd9eb3d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:37:38.255731   13879 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 21:37:38.255787   13879 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 21:37:38.255828   13879 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 21:37:38.255864   13879 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 21:37:38.256404   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 21:37:38.278926   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 21:37:38.300218   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 21:37:38.320704   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 21:37:38.341788   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 21:37:38.362293   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 21:37:38.382885   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 21:37:38.405679   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 21:37:38.425719   13879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 21:37:38.446250   13879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 21:37:38.460941   13879 ssh_runner.go:195] Run: openssl version
	I0914 21:37:38.465995   13879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 21:37:38.475637   13879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:37:38.480398   13879 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:37:38.480458   13879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:37:38.485876   13879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
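	Note on the two commands above: the first prints the OpenSSL subject hash of the minikube CA, and the second links the CA into /etc/ssl/certs under that hash-named file, which is how the system trust store resolves it. A quick manual check of the same relationship (a sketch, not output from this run):
	
		# the symlink name is the CA's subject hash with a ".0" suffix
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		ls -l /etc/ssl/certs/b5213941.0
	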
	I0914 21:37:38.496596   13879 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 21:37:38.500569   13879 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 21:37:38.500618   13879 kubeadm.go:404] StartCluster: {Name:addons-452179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-452179 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:37:38.500703   13879 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 21:37:38.500771   13879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 21:37:38.533404   13879 cri.go:89] found id: ""
	I0914 21:37:38.533470   13879 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 21:37:38.543491   13879 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 21:37:38.553251   13879 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 21:37:38.562947   13879 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 21:37:38.562992   13879 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 21:37:38.614943   13879 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 21:37:38.615027   13879 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 21:37:38.733172   13879 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 21:37:38.733313   13879 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 21:37:38.733552   13879 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 21:37:38.896149   13879 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 21:37:39.011190   13879 out.go:204]   - Generating certificates and keys ...
	I0914 21:37:39.011412   13879 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 21:37:39.011523   13879 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 21:37:39.364633   13879 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 21:37:39.445104   13879 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 21:37:39.499177   13879 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 21:37:39.640824   13879 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 21:37:40.170207   13879 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 21:37:40.170537   13879 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-452179 localhost] and IPs [192.168.39.45 127.0.0.1 ::1]
	I0914 21:37:40.343282   13879 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 21:37:40.343434   13879 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-452179 localhost] and IPs [192.168.39.45 127.0.0.1 ::1]
	I0914 21:37:40.638984   13879 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 21:37:40.812072   13879 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 21:37:41.000534   13879 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 21:37:41.000698   13879 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 21:37:41.181588   13879 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 21:37:41.286280   13879 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 21:37:41.414963   13879 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 21:37:41.580317   13879 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 21:37:41.581049   13879 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 21:37:41.583369   13879 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 21:37:41.585326   13879 out.go:204]   - Booting up control plane ...
	I0914 21:37:41.585441   13879 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 21:37:41.586651   13879 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 21:37:41.587529   13879 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 21:37:41.601721   13879 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 21:37:41.602249   13879 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 21:37:41.602351   13879 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 21:37:41.725419   13879 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 21:37:48.725296   13879 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003045 seconds
	I0914 21:37:48.725459   13879 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 21:37:48.747076   13879 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 21:37:49.285155   13879 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 21:37:49.285421   13879 kubeadm.go:322] [mark-control-plane] Marking the node addons-452179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 21:37:49.799399   13879 kubeadm.go:322] [bootstrap-token] Using token: kariau.i4f307etbydn3dme
	I0914 21:37:49.800824   13879 out.go:204]   - Configuring RBAC rules ...
	I0914 21:37:49.800984   13879 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 21:37:49.809907   13879 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 21:37:49.817890   13879 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 21:37:49.821833   13879 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 21:37:49.828078   13879 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 21:37:49.832045   13879 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 21:37:49.845884   13879 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 21:37:50.071495   13879 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 21:37:50.216876   13879 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 21:37:50.218410   13879 kubeadm.go:322] 
	I0914 21:37:50.218476   13879 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 21:37:50.218486   13879 kubeadm.go:322] 
	I0914 21:37:50.218567   13879 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 21:37:50.218575   13879 kubeadm.go:322] 
	I0914 21:37:50.218595   13879 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 21:37:50.218697   13879 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 21:37:50.218783   13879 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 21:37:50.218793   13879 kubeadm.go:322] 
	I0914 21:37:50.218868   13879 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 21:37:50.218883   13879 kubeadm.go:322] 
	I0914 21:37:50.218968   13879 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 21:37:50.218979   13879 kubeadm.go:322] 
	I0914 21:37:50.219058   13879 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 21:37:50.219157   13879 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 21:37:50.219223   13879 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 21:37:50.219229   13879 kubeadm.go:322] 
	I0914 21:37:50.219301   13879 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 21:37:50.219381   13879 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 21:37:50.219390   13879 kubeadm.go:322] 
	I0914 21:37:50.219497   13879 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kariau.i4f307etbydn3dme \
	I0914 21:37:50.219626   13879 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 21:37:50.219659   13879 kubeadm.go:322] 	--control-plane 
	I0914 21:37:50.219669   13879 kubeadm.go:322] 
	I0914 21:37:50.219791   13879 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 21:37:50.219802   13879 kubeadm.go:322] 
	I0914 21:37:50.219887   13879 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kariau.i4f307etbydn3dme \
	I0914 21:37:50.219998   13879 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 21:37:50.220653   13879 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 21:37:50.220674   13879 cni.go:84] Creating CNI manager for ""
	I0914 21:37:50.220684   13879 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 21:37:50.222514   13879 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 21:37:50.223883   13879 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 21:37:50.248576   13879 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
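	Note on the bridge CNI step above: the conflist is written from memory to /etc/cni/net.d/1-k8s.conflist (457 bytes) and its contents are not echoed into this log. If needed, it can be inspected directly on the node (a sketch, assuming the minikube binary is on PATH):
	
		minikube -p addons-452179 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	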
	I0914 21:37:50.280208   13879 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 21:37:50.280301   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:50.280357   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=addons-452179 minikube.k8s.io/updated_at=2023_09_14T21_37_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:50.311788   13879 ops.go:34] apiserver oom_adj: -16
	I0914 21:37:50.468246   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:50.568267   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:51.167902   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:51.667525   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:52.168133   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:52.667309   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:53.168022   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:53.667816   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:54.167637   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:54.667912   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:55.167288   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:55.668227   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:56.168179   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:56.667315   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:57.167798   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:57.667279   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:58.167415   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:58.668135   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:59.167510   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:37:59.667574   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:38:00.168100   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:38:00.667969   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:38:01.167983   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:38:01.667621   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:38:02.167837   13879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:38:02.267437   13879 kubeadm.go:1081] duration metric: took 11.987189281s to wait for elevateKubeSystemPrivileges.
	I0914 21:38:02.267499   13879 kubeadm.go:406] StartCluster complete in 23.766884004s
	I0914 21:38:02.267523   13879 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:38:02.267653   13879 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:38:02.268051   13879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:38:02.268251   13879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 21:38:02.268328   13879 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0914 21:38:02.268422   13879 addons.go:69] Setting volumesnapshots=true in profile "addons-452179"
	I0914 21:38:02.268435   13879 addons.go:69] Setting ingress=true in profile "addons-452179"
	I0914 21:38:02.268451   13879 addons.go:69] Setting registry=true in profile "addons-452179"
	I0914 21:38:02.268460   13879 addons.go:231] Setting addon ingress=true in "addons-452179"
	I0914 21:38:02.268460   13879 addons.go:69] Setting ingress-dns=true in profile "addons-452179"
	I0914 21:38:02.268472   13879 addons.go:69] Setting storage-provisioner=true in profile "addons-452179"
	I0914 21:38:02.268471   13879 config.go:182] Loaded profile config "addons-452179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 21:38:02.268480   13879 addons.go:231] Setting addon ingress-dns=true in "addons-452179"
	I0914 21:38:02.268483   13879 addons.go:231] Setting addon storage-provisioner=true in "addons-452179"
	I0914 21:38:02.268482   13879 addons.go:69] Setting default-storageclass=true in profile "addons-452179"
	I0914 21:38:02.268504   13879 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-452179"
	I0914 21:38:02.268518   13879 addons.go:69] Setting cloud-spanner=true in profile "addons-452179"
	I0914 21:38:02.268527   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.268528   13879 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-452179"
	I0914 21:38:02.268580   13879 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-452179"
	I0914 21:38:02.268626   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.268501   13879 addons.go:69] Setting inspektor-gadget=true in profile "addons-452179"
	I0914 21:38:02.268732   13879 addons.go:231] Setting addon inspektor-gadget=true in "addons-452179"
	I0914 21:38:02.268773   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.268450   13879 addons.go:69] Setting metrics-server=true in profile "addons-452179"
	I0914 21:38:02.268827   13879 addons.go:231] Setting addon metrics-server=true in "addons-452179"
	I0914 21:38:02.268884   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.268527   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.268988   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.268528   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.269084   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.269093   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.269116   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.268533   13879 addons.go:231] Setting addon cloud-spanner=true in "addons-452179"
	I0914 21:38:02.268462   13879 addons.go:231] Setting addon registry=true in "addons-452179"
	I0914 21:38:02.269142   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.269161   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.269163   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.269167   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.269265   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.269284   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.269318   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.269348   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.269396   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.268446   13879 addons.go:231] Setting addon volumesnapshots=true in "addons-452179"
	I0914 21:38:02.269418   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.269444   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.269468   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.269482   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.269490   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.269513   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.268534   13879 addons.go:69] Setting helm-tiller=true in profile "addons-452179"
	I0914 21:38:02.269660   13879 addons.go:231] Setting addon helm-tiller=true in "addons-452179"
	I0914 21:38:02.269697   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.269030   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.268535   13879 addons.go:69] Setting gcp-auth=true in profile "addons-452179"
	I0914 21:38:02.269954   13879 mustload.go:65] Loading cluster: addons-452179
	I0914 21:38:02.270036   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.270062   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.270137   13879 config.go:182] Loaded profile config "addons-452179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 21:38:02.269130   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.288667   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I0914 21:38:02.288919   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0914 21:38:02.289014   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0914 21:38:02.289153   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41509
	I0914 21:38:02.289413   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.289432   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.289817   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.289895   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.289917   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.289928   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.289936   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.289948   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.290289   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.290296   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.290312   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.290316   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.290371   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.290382   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.290662   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.290973   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.291008   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.291064   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.291101   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.291267   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.291484   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I0914 21:38:02.291674   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.291703   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.291858   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0914 21:38:02.292013   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.292040   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.292191   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.292363   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.292524   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.292537   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.292797   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.292814   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.293160   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.293184   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.295717   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.295744   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.295754   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.295766   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.295854   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.295884   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.296252   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.296296   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.314741   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37695
	I0914 21:38:02.315485   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.316103   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.316121   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.316540   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.316983   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0914 21:38:02.317284   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I0914 21:38:02.317715   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.318667   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.318713   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.318728   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.318734   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43239
	I0914 21:38:02.319050   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.319154   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.319295   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.319633   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.319651   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.319788   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.319834   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.319981   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.320156   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.320226   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0914 21:38:02.320373   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0914 21:38:02.320787   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.320921   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.321149   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.321169   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.321361   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.321380   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.321554   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.321720   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.322144   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.322180   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.322366   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.322657   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.322674   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.323415   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0914 21:38:02.327248   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.327317   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.327372   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0914 21:38:02.328095   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.328194   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.330618   13879 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0914 21:38:02.328611   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.330696   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.331950   13879 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0914 21:38:02.328754   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.329272   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.331143   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.332345   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34415
	I0914 21:38:02.333591   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I0914 21:38:02.334131   13879 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 21:38:02.334246   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.334474   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.334625   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.335570   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.335584   13879 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 21:38:02.335596   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 21:38:02.335616   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.335589   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 21:38:02.335667   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.335162   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I0914 21:38:02.336250   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.336262   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.336485   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.336994   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.337099   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.337120   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.337487   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.337504   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.337822   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.337889   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.338087   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.338215   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.338462   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.338492   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.338581   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.340328   13879 out.go:177]   - Using image docker.io/registry:2.8.1
	I0914 21:38:02.339086   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.340121   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.341231   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.341726   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.343107   13879 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0914 21:38:02.341275   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.341363   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.341909   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.342063   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.342151   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.342288   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.343249   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I0914 21:38:02.345402   13879 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0914 21:38:02.344362   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.344426   13879 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 21:38:02.344548   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.344375   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.345653   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.345717   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.345798   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.346773   13879 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0914 21:38:02.346785   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0914 21:38:02.346804   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.346972   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.348006   13879 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0914 21:38:02.349550   13879 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 21:38:02.349566   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 21:38:02.349584   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.348106   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 21:38:02.349638   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.348126   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.348160   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.349714   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.350037   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.350064   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.348200   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.348610   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.350241   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.350793   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.351355   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.351556   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.353413   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.354906   13879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0914 21:38:02.356071   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.357329   13879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 21:38:02.358631   13879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 21:38:02.356424   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
	I0914 21:38:02.358649   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.360288   13879 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 21:38:02.360310   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0914 21:38:02.360329   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.360295   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.357457   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.360406   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.360434   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.358305   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.360462   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.360486   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.359790   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.359852   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.359918   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.356976   13879 addons.go:231] Setting addon default-storageclass=true in "addons-452179"
	I0914 21:38:02.360602   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:02.360676   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.360966   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.361005   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.361119   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.361141   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.362098   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.362122   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.362127   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.362100   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0914 21:38:02.362098   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.362461   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.362469   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.362688   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.362743   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.362834   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.362870   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.362922   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.363396   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.363418   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.363747   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.363913   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.364774   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.364916   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.366816   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 21:38:02.365364   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.365390   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.365471   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.366511   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.368067   13879 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 21:38:02.368087   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 21:38:02.368094   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.368110   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.368772   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.369997   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 21:38:02.371124   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.369048   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.369143   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I0914 21:38:02.371630   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.372364   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 21:38:02.373547   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 21:38:02.372373   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.371741   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.372545   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.372760   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.373426   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0914 21:38:02.376308   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 21:38:02.375328   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I0914 21:38:02.375328   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.375335   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.375428   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.376478   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.376864   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.376878   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.377406   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.378292   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 21:38:02.377566   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.379814   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.378467   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.378502   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.378757   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.380029   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.379783   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 21:38:02.381370   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 21:38:02.380342   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.380354   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.381609   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.383823   13879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 21:38:02.382727   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.382815   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.385235   13879 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 21:38:02.386298   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35987
	I0914 21:38:02.386599   13879 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0914 21:38:02.388036   13879 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0914 21:38:02.388055   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0914 21:38:02.388072   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.386603   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 21:38:02.388126   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.387266   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.387961   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.389777   13879 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 21:38:02.388507   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.391104   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.391105   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.391190   13879 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 21:38:02.391207   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 21:38:02.391220   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.391379   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.391546   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.391565   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.391753   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.391816   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.391835   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.392011   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.392020   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.392060   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.392264   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.392283   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.392482   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.392467   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.392648   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.392681   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:02.392718   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:02.393948   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.394263   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.394297   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.394431   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.394676   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.394853   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.395009   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.407572   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0914 21:38:02.407906   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:02.408342   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:02.408365   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:02.408620   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:02.408797   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:02.410117   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:02.410326   13879 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 21:38:02.410344   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 21:38:02.410361   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:02.413300   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.413703   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:02.413727   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:02.413877   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:02.414014   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:02.414144   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:02.414250   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:02.514529   13879 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-452179" context rescaled to 1 replicas
	I0914 21:38:02.514574   13879 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 21:38:02.516575   13879 out.go:177] * Verifying Kubernetes components...
	I0914 21:38:02.518141   13879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 21:38:02.557856   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 21:38:02.606011   13879 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 21:38:02.606030   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 21:38:02.615578   13879 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 21:38:02.615601   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 21:38:02.676753   13879 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 21:38:02.676775   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 21:38:02.685594   13879 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 21:38:02.685614   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 21:38:02.688977   13879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 21:38:02.689514   13879 node_ready.go:35] waiting up to 6m0s for node "addons-452179" to be "Ready" ...
	I0914 21:38:02.692320   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 21:38:02.721585   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 21:38:02.727293   13879 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 21:38:02.727312   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 21:38:02.730918   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 21:38:02.734361   13879 node_ready.go:49] node "addons-452179" has status "Ready":"True"
	I0914 21:38:02.734381   13879 node_ready.go:38] duration metric: took 44.846686ms waiting for node "addons-452179" to be "Ready" ...
	I0914 21:38:02.734392   13879 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 21:38:02.742134   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 21:38:02.762045   13879 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-452179" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:02.775448   13879 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0914 21:38:02.775477   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0914 21:38:02.796068   13879 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 21:38:02.796089   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 21:38:02.828532   13879 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 21:38:02.828551   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 21:38:02.853912   13879 pod_ready.go:92] pod "etcd-addons-452179" in "kube-system" namespace has status "Ready":"True"
	I0914 21:38:02.853929   13879 pod_ready.go:81] duration metric: took 91.859584ms waiting for pod "etcd-addons-452179" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:02.853941   13879 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-452179" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:02.897845   13879 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 21:38:02.897864   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 21:38:02.910758   13879 pod_ready.go:92] pod "kube-apiserver-addons-452179" in "kube-system" namespace has status "Ready":"True"
	I0914 21:38:02.910775   13879 pod_ready.go:81] duration metric: took 56.827992ms waiting for pod "kube-apiserver-addons-452179" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:02.910784   13879 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-452179" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:02.950699   13879 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 21:38:02.950720   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0914 21:38:02.973165   13879 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 21:38:02.973185   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 21:38:02.982199   13879 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 21:38:02.982216   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 21:38:03.034401   13879 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 21:38:03.034424   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 21:38:03.034789   13879 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 21:38:03.034808   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 21:38:03.091202   13879 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 21:38:03.091238   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 21:38:03.098816   13879 pod_ready.go:92] pod "kube-controller-manager-addons-452179" in "kube-system" namespace has status "Ready":"True"
	I0914 21:38:03.098847   13879 pod_ready.go:81] duration metric: took 188.053765ms waiting for pod "kube-controller-manager-addons-452179" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:03.098867   13879 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cgjkd" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:03.108059   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 21:38:03.124886   13879 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 21:38:03.124930   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 21:38:03.129194   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 21:38:03.166956   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 21:38:03.167473   13879 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 21:38:03.167494   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 21:38:03.194847   13879 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 21:38:03.194873   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 21:38:03.243451   13879 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 21:38:03.243484   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 21:38:03.297468   13879 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 21:38:03.297494   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 21:38:03.303625   13879 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 21:38:03.303646   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 21:38:03.341252   13879 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 21:38:03.341280   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 21:38:03.371360   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 21:38:03.375530   13879 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 21:38:03.375547   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 21:38:03.415942   13879 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 21:38:03.415965   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 21:38:03.428240   13879 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 21:38:03.428258   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 21:38:03.473410   13879 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 21:38:03.473434   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 21:38:03.503692   13879 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 21:38:03.503713   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0914 21:38:03.539180   13879 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 21:38:03.539202   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 21:38:03.639098   13879 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 21:38:03.639121   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 21:38:03.645641   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 21:38:03.730876   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 21:38:05.598528   13879 pod_ready.go:102] pod "kube-proxy-cgjkd" in "kube-system" namespace has status "Ready":"False"
	I0914 21:38:07.726344   13879 pod_ready.go:102] pod "kube-proxy-cgjkd" in "kube-system" namespace has status "Ready":"False"
	I0914 21:38:07.777041   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.219146304s)
	I0914 21:38:07.777089   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:07.777103   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:07.777101   13879 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.088073922s)
	I0914 21:38:07.777125   13879 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0914 21:38:07.777449   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:07.777466   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:07.777475   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:07.777484   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:07.777495   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:07.777705   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:07.777748   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:07.777725   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:09.092914   13879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 21:38:09.092957   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:09.095894   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:09.096386   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:09.096417   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:09.096629   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:09.096834   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:09.097015   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:09.097191   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:09.254164   13879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 21:38:09.299461   13879 addons.go:231] Setting addon gcp-auth=true in "addons-452179"
	I0914 21:38:09.299527   13879 host.go:66] Checking if "addons-452179" exists ...
	I0914 21:38:09.299886   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:09.299921   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:09.316404   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44831
	I0914 21:38:09.316830   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:09.317365   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:09.317392   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:09.317753   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:09.318346   13879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:38:09.318380   13879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:38:09.333743   13879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0914 21:38:09.334134   13879 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:38:09.334566   13879 main.go:141] libmachine: Using API Version  1
	I0914 21:38:09.334593   13879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:38:09.334876   13879 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:38:09.335092   13879 main.go:141] libmachine: (addons-452179) Calling .GetState
	I0914 21:38:09.336867   13879 main.go:141] libmachine: (addons-452179) Calling .DriverName
	I0914 21:38:09.337116   13879 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 21:38:09.337142   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHHostname
	I0914 21:38:09.339751   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:09.340183   13879 main.go:141] libmachine: (addons-452179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c1:1e", ip: ""} in network mk-addons-452179: {Iface:virbr1 ExpiryTime:2023-09-14 22:37:20 +0000 UTC Type:0 Mac:52:54:00:d4:c1:1e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-452179 Clientid:01:52:54:00:d4:c1:1e}
	I0914 21:38:09.340254   13879 main.go:141] libmachine: (addons-452179) DBG | domain addons-452179 has defined IP address 192.168.39.45 and MAC address 52:54:00:d4:c1:1e in network mk-addons-452179
	I0914 21:38:09.340376   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHPort
	I0914 21:38:09.340578   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHKeyPath
	I0914 21:38:09.340743   13879 main.go:141] libmachine: (addons-452179) Calling .GetSSHUsername
	I0914 21:38:09.340905   13879 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/addons-452179/id_rsa Username:docker}
	I0914 21:38:09.943134   13879 pod_ready.go:102] pod "kube-proxy-cgjkd" in "kube-system" namespace has status "Ready":"False"
	I0914 21:38:10.069655   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.377309205s)
	I0914 21:38:10.069700   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.069713   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.069738   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.348120355s)
	I0914 21:38:10.069774   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.069804   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.069818   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.338881165s)
	I0914 21:38:10.069836   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.069896   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.069903   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.961817296s)
	I0914 21:38:10.069922   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.069867   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.32771099s)
	I0914 21:38:10.069940   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.069954   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.069971   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.070010   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.94078584s)
	I0914 21:38:10.070092   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.070103   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.070140   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.070175   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.070185   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.070194   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.070203   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.070217   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.698829836s)
	I0914 21:38:10.070228   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.070264   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.070274   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	W0914 21:38:10.070269   13879 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 21:38:10.070284   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.070297   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.070300   13879 retry.go:31] will retry after 340.975399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
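	[editor's note] The apply failure and retry above are the usual CRD ordering race: the VolumeSnapshotClass object cannot be admitted until the volumesnapshotclasses CRD it references has been registered and established, so the first combined apply fails with "ensure CRDs are installed first", minikube schedules a retry (retry.go:31), and the re-applied batch at 21:38:10.411 completes a couple of seconds later. The following is a minimal, hypothetical Go sketch of that retry-with-backoff pattern, not minikube's actual addons.go/retry.go code; the manifest path is copied from the log purely for illustration.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry shells out to kubectl and retries a failed apply after a short
    // backoff, which is usually enough time for freshly created CRDs to become
    // established so that custom resources referencing them can be admitted.
    func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply of %s failed: %v\n%s", manifest, err, out)
            time.Sleep(backoff)
        }
        return lastErr
    }

    func main() {
        // Manifest name mirrors the addon file seen in the log; the path is illustrative only.
        err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 500*time.Millisecond)
        if err != nil {
            fmt.Println(err)
        }
    }
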
	I0914 21:38:10.070337   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.070354   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.070368   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.070400   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.42472686s)
	I0914 21:38:10.070430   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.070445   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.070473   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.070501   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.070515   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.070531   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.070088   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.903106426s)
	I0914 21:38:10.072459   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.072489   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.072535   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.072545   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.072564   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.072591   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.072601   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.072602   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.072619   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.072623   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.072632   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.072673   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.072685   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.073036   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.073055   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.073095   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.073107   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.073131   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.073144   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.073437   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.073480   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.073587   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.073599   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.073776   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.073853   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.073864   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.073877   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.074009   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.074020   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.074109   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.074147   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.074157   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.074166   13879 addons.go:467] Verifying addon registry=true in "addons-452179"
	I0914 21:38:10.077661   13879 out.go:177] * Verifying registry addon...
	I0914 21:38:10.074487   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.074531   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.074557   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.074576   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.076345   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.076372   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.077144   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.079043   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.079059   13879 addons.go:467] Verifying addon ingress=true in "addons-452179"
	I0914 21:38:10.079067   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.079077   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.079081   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.079090   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.079105   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.080586   13879 out.go:177] * Verifying ingress addon...
	I0914 21:38:10.079106   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.079093   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.079880   13879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 21:38:10.081916   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.081919   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.081945   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.081954   13879 addons.go:467] Verifying addon metrics-server=true in "addons-452179"
	I0914 21:38:10.082471   13879 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 21:38:10.082874   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.082891   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.106513   13879 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 21:38:10.106535   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:10.128864   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:10.132410   13879 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 21:38:10.132427   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:10.149008   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
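	[editor's note] The repeated kapi.go entries above and below are minikube polling each addon's pods by label selector until every matching pod reports Ready. A rough client-go equivalent of that check is sketched here (list pods by selector, then inspect the PodReady condition); it assumes a standard kubeconfig at the default location and is not minikube's kapi implementation.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allPodsReady lists pods matching the label selector in the namespace and
    // reports whether every matching pod has the Ready condition set to True.
    func allPodsReady(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, nil // nothing scheduled for this selector yet
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        // Using ~/.kube/config is an assumption for this sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ok, err := allPodsReady(cs, "kube-system", "kubernetes.io/minikube-addons=registry")
        fmt.Println(ok, err)
    }
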
	I0914 21:38:10.411643   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 21:38:10.693357   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:10.693453   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:10.800591   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.069655828s)
	I0914 21:38:10.800645   13879 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.463490257s)
	I0914 21:38:10.802428   13879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 21:38:10.800645   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.804107   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.805695   13879 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0914 21:38:10.804457   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.804527   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.807201   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.807216   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:10.807230   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:10.807261   13879 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 21:38:10.807286   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 21:38:10.807502   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:10.807518   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:10.807528   13879 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-452179"
	I0914 21:38:10.807535   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:10.808926   13879 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 21:38:10.811183   13879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 21:38:10.859009   13879 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 21:38:10.859037   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:10.929323   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:10.967027   13879 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 21:38:10.967048   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 21:38:11.010973   13879 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 21:38:11.010998   13879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0914 21:38:11.067230   13879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 21:38:11.165873   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:11.169724   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:11.438833   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:11.657012   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:11.681619   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:11.943059   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:12.133951   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:12.161259   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:12.404443   13879 pod_ready.go:102] pod "kube-proxy-cgjkd" in "kube-system" namespace has status "Ready":"False"
	I0914 21:38:12.450978   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:12.507152   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.095426256s)
	I0914 21:38:12.507223   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:12.507246   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:12.507526   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:12.507570   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:12.507581   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:12.507600   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:12.507615   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:12.507919   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:12.507920   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:12.507949   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:12.634094   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:12.655957   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:12.971923   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:13.141519   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:13.153147   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:13.161705   13879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.094437987s)
	I0914 21:38:13.161745   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:13.161758   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:13.162048   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:13.162103   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:13.162128   13879 main.go:141] libmachine: Making call to close driver server
	I0914 21:38:13.162141   13879 main.go:141] libmachine: (addons-452179) Calling .Close
	I0914 21:38:13.162069   13879 main.go:141] libmachine: (addons-452179) DBG | Closing plugin on server side
	I0914 21:38:13.162402   13879 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:38:13.162416   13879 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:38:13.164064   13879 addons.go:467] Verifying addon gcp-auth=true in "addons-452179"
	I0914 21:38:13.165837   13879 out.go:177] * Verifying gcp-auth addon...
	I0914 21:38:13.167976   13879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 21:38:13.178389   13879 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 21:38:13.178411   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:13.191632   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:13.436110   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:13.647039   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:13.655967   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:13.695221   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:13.938058   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:14.133857   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:14.153975   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:14.195540   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:14.434932   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:14.633903   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:14.653337   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:14.696054   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:14.943433   13879 pod_ready.go:102] pod "kube-proxy-cgjkd" in "kube-system" namespace has status "Ready":"False"
	I0914 21:38:14.945397   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:15.134837   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:15.153812   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:15.196637   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:15.398614   13879 pod_ready.go:92] pod "kube-proxy-cgjkd" in "kube-system" namespace has status "Ready":"True"
	I0914 21:38:15.398644   13879 pod_ready.go:81] duration metric: took 12.299765471s waiting for pod "kube-proxy-cgjkd" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:15.398694   13879 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-452179" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:15.406351   13879 pod_ready.go:92] pod "kube-scheduler-addons-452179" in "kube-system" namespace has status "Ready":"True"
	I0914 21:38:15.406379   13879 pod_ready.go:81] duration metric: took 7.67416ms waiting for pod "kube-scheduler-addons-452179" in "kube-system" namespace to be "Ready" ...
	I0914 21:38:15.406391   13879 pod_ready.go:38] duration metric: took 12.671978534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 21:38:15.406411   13879 api_server.go:52] waiting for apiserver process to appear ...
	I0914 21:38:15.406465   13879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 21:38:15.423863   13879 api_server.go:72] duration metric: took 12.909258027s to wait for apiserver process to appear ...
	I0914 21:38:15.423885   13879 api_server.go:88] waiting for apiserver healthz status ...
	I0914 21:38:15.423902   13879 api_server.go:253] Checking apiserver healthz at https://192.168.39.45:8443/healthz ...
	I0914 21:38:15.431106   13879 api_server.go:279] https://192.168.39.45:8443/healthz returned 200:
	ok
	I0914 21:38:15.433257   13879 api_server.go:141] control plane version: v1.28.1
	I0914 21:38:15.433290   13879 api_server.go:131] duration metric: took 9.392988ms to wait for apiserver health ...
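	[editor's note] The healthz probe at 21:38:15.423-15.433 is an HTTPS GET against the apiserver's /healthz endpoint, which a default cluster typically exposes even to unauthenticated clients. A minimal sketch of such a probe follows; the endpoint URL is copied from the log and the skipped TLS verification is an assumption made only for this throwaway test-cluster illustration.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is acceptable here only because this is a local probe
        // against a disposable test cluster; the address comes from the log above.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.45:8443/healthz")
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers "200 ok"
    }
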
	I0914 21:38:15.433299   13879 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 21:38:15.436378   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:15.443915   13879 system_pods.go:59] 18 kube-system pods found
	I0914 21:38:15.443945   13879 system_pods.go:61] "coredns-5dd5756b68-jzfkt" [d8649ca6-17a0-458f-a086-4a5c1b4d342b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 21:38:15.443958   13879 system_pods.go:61] "coredns-5dd5756b68-rd52t" [472afaa7-39b2-4b32-b2d5-93b425456beb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 21:38:15.443969   13879 system_pods.go:61] "csi-hostpath-attacher-0" [3b9084aa-8aa0-4a8e-8c74-ac88322a1cf6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 21:38:15.443984   13879 system_pods.go:61] "csi-hostpath-resizer-0" [ce714652-38e0-4ed0-a3b7-159f657b51d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 21:38:15.443995   13879 system_pods.go:61] "csi-hostpathplugin-zfplx" [c3088cde-8d05-45e6-a95c-bba2fa7fdece] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 21:38:15.444008   13879 system_pods.go:61] "etcd-addons-452179" [aebf932a-c6aa-4177-a146-68a686009632] Running
	I0914 21:38:15.444018   13879 system_pods.go:61] "kube-apiserver-addons-452179" [14d40460-77fd-4594-9b6e-f907e2c84371] Running
	I0914 21:38:15.444028   13879 system_pods.go:61] "kube-controller-manager-addons-452179" [680adb45-c17d-4025-98a3-486fcec0acb8] Running
	I0914 21:38:15.444041   13879 system_pods.go:61] "kube-ingress-dns-minikube" [1247df4f-da4c-4014-984e-f43e4db830c3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0914 21:38:15.444052   13879 system_pods.go:61] "kube-proxy-cgjkd" [0b1bf83f-fb5e-4585-9ab3-578ed9dee271] Running
	I0914 21:38:15.444062   13879 system_pods.go:61] "kube-scheduler-addons-452179" [65a7efd7-8fb0-40b1-8bb8-bd850217b8b4] Running
	I0914 21:38:15.444075   13879 system_pods.go:61] "metrics-server-7c66d45ddc-h6p2j" [e307d1a4-c43b-46bb-b55d-17ba22ab836c] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 21:38:15.444087   13879 system_pods.go:61] "registry-5hndr" [48f881de-7dbb-4535-8516-d1f43d100169] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 21:38:15.444100   13879 system_pods.go:61] "registry-proxy-4d4zp" [ae93be0b-e4f3-45d1-a641-95ee97d410d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 21:38:15.444113   13879 system_pods.go:61] "snapshot-controller-58dbcc7b99-l8kdc" [9b2fc6c2-70cb-430c-85f5-f38357aa4635] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 21:38:15.444126   13879 system_pods.go:61] "snapshot-controller-58dbcc7b99-x2tng" [1134441a-bcf0-4385-92bc-0d97ac0dca19] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 21:38:15.444138   13879 system_pods.go:61] "storage-provisioner" [c8751287-1dc6-4334-9ede-251e676ff2bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 21:38:15.444162   13879 system_pods.go:61] "tiller-deploy-7b677967b9-m6w86" [221ba0c5-6fb4-46ff-95bc-9ec082635102] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 21:38:15.444169   13879 system_pods.go:74] duration metric: took 10.863657ms to wait for pod list to return data ...
	I0914 21:38:15.444178   13879 default_sa.go:34] waiting for default service account to be created ...
	I0914 21:38:15.446911   13879 default_sa.go:45] found service account: "default"
	I0914 21:38:15.446925   13879 default_sa.go:55] duration metric: took 2.742027ms for default service account to be created ...
	I0914 21:38:15.446931   13879 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 21:38:15.456300   13879 system_pods.go:86] 18 kube-system pods found
	I0914 21:38:15.456327   13879 system_pods.go:89] "coredns-5dd5756b68-jzfkt" [d8649ca6-17a0-458f-a086-4a5c1b4d342b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 21:38:15.456341   13879 system_pods.go:89] "coredns-5dd5756b68-rd52t" [472afaa7-39b2-4b32-b2d5-93b425456beb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 21:38:15.456352   13879 system_pods.go:89] "csi-hostpath-attacher-0" [3b9084aa-8aa0-4a8e-8c74-ac88322a1cf6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 21:38:15.456363   13879 system_pods.go:89] "csi-hostpath-resizer-0" [ce714652-38e0-4ed0-a3b7-159f657b51d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 21:38:15.456394   13879 system_pods.go:89] "csi-hostpathplugin-zfplx" [c3088cde-8d05-45e6-a95c-bba2fa7fdece] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 21:38:15.456408   13879 system_pods.go:89] "etcd-addons-452179" [aebf932a-c6aa-4177-a146-68a686009632] Running
	I0914 21:38:15.456416   13879 system_pods.go:89] "kube-apiserver-addons-452179" [14d40460-77fd-4594-9b6e-f907e2c84371] Running
	I0914 21:38:15.456424   13879 system_pods.go:89] "kube-controller-manager-addons-452179" [680adb45-c17d-4025-98a3-486fcec0acb8] Running
	I0914 21:38:15.456437   13879 system_pods.go:89] "kube-ingress-dns-minikube" [1247df4f-da4c-4014-984e-f43e4db830c3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0914 21:38:15.456447   13879 system_pods.go:89] "kube-proxy-cgjkd" [0b1bf83f-fb5e-4585-9ab3-578ed9dee271] Running
	I0914 21:38:15.456462   13879 system_pods.go:89] "kube-scheduler-addons-452179" [65a7efd7-8fb0-40b1-8bb8-bd850217b8b4] Running
	I0914 21:38:15.456472   13879 system_pods.go:89] "metrics-server-7c66d45ddc-h6p2j" [e307d1a4-c43b-46bb-b55d-17ba22ab836c] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 21:38:15.456484   13879 system_pods.go:89] "registry-5hndr" [48f881de-7dbb-4535-8516-d1f43d100169] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 21:38:15.456496   13879 system_pods.go:89] "registry-proxy-4d4zp" [ae93be0b-e4f3-45d1-a641-95ee97d410d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 21:38:15.456509   13879 system_pods.go:89] "snapshot-controller-58dbcc7b99-l8kdc" [9b2fc6c2-70cb-430c-85f5-f38357aa4635] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 21:38:15.456522   13879 system_pods.go:89] "snapshot-controller-58dbcc7b99-x2tng" [1134441a-bcf0-4385-92bc-0d97ac0dca19] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 21:38:15.456535   13879 system_pods.go:89] "storage-provisioner" [c8751287-1dc6-4334-9ede-251e676ff2bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 21:38:15.456547   13879 system_pods.go:89] "tiller-deploy-7b677967b9-m6w86" [221ba0c5-6fb4-46ff-95bc-9ec082635102] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 21:38:15.456558   13879 system_pods.go:126] duration metric: took 9.620835ms to wait for k8s-apps to be running ...
	I0914 21:38:15.456569   13879 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 21:38:15.456617   13879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 21:38:15.482434   13879 system_svc.go:56] duration metric: took 25.855346ms WaitForService to wait for kubelet.
	I0914 21:38:15.482465   13879 kubeadm.go:581] duration metric: took 12.96786499s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 21:38:15.482490   13879 node_conditions.go:102] verifying NodePressure condition ...
	I0914 21:38:15.487074   13879 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 21:38:15.487104   13879 node_conditions.go:123] node cpu capacity is 2
	I0914 21:38:15.487120   13879 node_conditions.go:105] duration metric: took 4.624448ms to run NodePressure ...
	I0914 21:38:15.487132   13879 start.go:228] waiting for startup goroutines ...
	I0914 21:38:15.634498   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:15.654203   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:15.696832   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:15.938539   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:16.134383   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:16.155819   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:16.195752   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:16.435374   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:16.636685   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:16.655516   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:16.696796   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:16.936535   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:17.138336   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:17.156478   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:17.197395   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:17.435439   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:17.650431   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:17.654462   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:17.700685   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:17.934809   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:18.137938   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:18.162041   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:18.205315   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:18.436681   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:18.633155   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:18.654725   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:18.695514   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:18.935187   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:19.159215   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:19.164865   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:19.207520   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:19.450022   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:19.637565   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:19.656878   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:19.696044   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:19.934687   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:20.136048   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:20.154555   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:20.195287   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:20.437553   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:20.636169   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:20.655769   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:20.696357   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:20.934746   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:21.133698   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:21.153357   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:21.194912   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:21.435733   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:21.634289   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:21.654155   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:21.695501   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:21.935351   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:22.134176   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:22.153743   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:22.195528   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:22.434239   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:22.634631   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:22.654607   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:22.695984   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:22.935975   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:23.134118   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:23.153130   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:23.196308   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:23.772561   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:23.780836   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:23.781317   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:23.784148   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:23.935526   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:24.134209   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:24.153942   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:24.195280   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:24.435975   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:24.635887   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:24.653217   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:24.695194   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:24.936323   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:25.134511   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:25.153878   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:25.195956   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:25.435557   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:25.633849   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:25.653783   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:25.695525   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:25.935383   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:26.134488   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:26.154210   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:26.196625   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:26.434475   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:26.634302   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:26.655552   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:26.696617   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:26.935101   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:27.177845   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:27.179804   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:27.195524   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:27.435426   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:27.638173   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:27.656593   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:27.695797   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:27.934961   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:28.133910   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:28.158251   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:28.196890   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:28.435560   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:28.633130   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:28.653658   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:28.695608   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:28.935252   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:29.134354   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:29.153560   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:29.195156   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:29.444681   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:29.634132   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:29.654147   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:29.695619   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:29.935961   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:30.135316   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:30.155593   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:30.194983   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:30.437462   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:30.634516   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:30.653233   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:30.696505   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:30.934372   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:31.134083   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:31.157835   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:31.198015   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:31.435979   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:31.634554   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:31.656674   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:31.705406   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:31.935958   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:32.133779   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:32.153002   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:32.195272   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:32.436086   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:32.639427   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:32.654903   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:32.697470   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:32.935298   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:33.136349   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:33.154791   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:33.195793   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:33.443291   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:33.649550   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:33.653828   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:33.695864   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:33.934752   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:34.134024   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:34.153822   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:34.195673   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:34.435144   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:34.635170   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:34.653761   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:34.696267   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:34.935517   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:35.133899   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:35.154828   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:35.198045   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:35.435181   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:35.634165   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:35.654037   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:35.695703   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:35.937861   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:36.133808   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:36.154478   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:36.199619   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:36.434903   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:36.637216   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:36.653867   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:36.695983   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:36.936222   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:37.133866   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:37.153326   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:37.196390   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:37.435405   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:37.633146   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:37.653699   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:37.695837   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:37.935373   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:38.134560   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:38.154339   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:38.196563   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:38.435622   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:38.635154   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:38.654391   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:38.696360   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:38.935969   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:39.135343   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:39.153848   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:39.196366   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:39.435292   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:39.634356   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:39.654052   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:39.695348   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:39.936089   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:40.133981   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:40.153695   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:40.195027   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:40.435161   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:40.633341   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:40.653552   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:40.695239   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:41.367812   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:41.369837   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:41.371535   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:41.371849   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:41.435307   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:41.634644   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:41.654771   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:41.695615   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:41.935046   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:42.136234   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:42.154447   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:42.195884   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:42.435607   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:42.635886   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:42.653076   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:42.695918   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:42.935290   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:43.133655   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:43.153237   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:43.195052   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:43.436562   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:43.790987   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:43.791274   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:43.791614   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:43.935663   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:44.134576   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:44.155509   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:44.195744   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:44.436356   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:44.634507   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:44.654822   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:44.697031   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:44.937511   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:45.133431   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:45.153818   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:45.196463   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:45.434725   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:45.634807   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:45.653162   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:45.695934   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:45.934875   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:46.133907   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:46.153310   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:46.196971   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:46.434971   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:46.633649   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:46.653781   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:46.695683   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:46.934493   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:47.135758   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:47.152749   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:47.196605   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:47.435868   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:47.633426   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:47.654872   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:47.695790   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:47.935300   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:48.134289   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:48.154019   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:48.195698   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:48.434655   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:48.636638   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:48.656622   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:48.700321   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:48.941361   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:49.133501   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:49.152966   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:49.195512   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:49.436017   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:49.633397   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:49.656060   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:49.706830   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:50.197234   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:50.197730   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:50.203460   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:50.209674   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:50.434902   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:50.633439   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:50.652868   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:50.695506   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:50.944836   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:51.135105   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:51.153237   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:51.196080   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:51.441724   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:51.633813   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:51.653280   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:51.695070   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:51.935979   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:52.134049   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:52.153901   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:52.196087   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:52.436368   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:52.634029   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:52.659094   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:52.697132   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:52.935592   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:53.134138   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:53.154864   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:53.195454   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:53.436804   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:53.635586   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:53.652897   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:53.696233   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:53.934686   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:54.135635   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:54.154961   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:54.195426   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:54.435000   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:54.634728   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:54.653282   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:54.695461   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:54.935108   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:55.133745   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:55.153106   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:55.195824   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:55.435160   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:55.633967   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:55.654031   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:55.695424   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:55.936232   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:56.135635   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:56.153771   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:56.198058   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:56.435288   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:56.633791   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:56.653267   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:56.695396   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:56.935991   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:57.134198   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:57.157383   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:57.195256   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:57.434830   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:57.633512   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:57.653215   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:57.696214   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:57.940952   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:58.133993   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:58.154248   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:58.195734   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:58.434499   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:58.635833   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:58.653807   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:58.695140   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:58.937336   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:59.134288   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:59.156179   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:59.196242   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:59.435274   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:38:59.635245   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:38:59.655937   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:38:59.695853   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:38:59.935596   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:00.135831   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:00.153321   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:00.194955   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:00.435451   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:00.634539   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:00.652984   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:00.701005   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:00.935892   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:01.134291   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:01.153927   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:01.196042   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:01.435129   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:01.634089   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:01.658306   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:01.696599   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:01.944800   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:02.134486   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:02.154676   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:02.195948   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:02.449371   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:02.634667   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:02.652934   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:02.695044   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:02.935273   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:03.134868   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:03.155446   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:03.195001   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:03.435340   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:03.634489   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:03.655688   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:03.701941   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:03.935072   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:04.134925   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:04.154400   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:04.195280   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:04.435072   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:04.634593   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:04.654546   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:04.696522   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:04.934856   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:05.134135   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:05.153527   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:05.195314   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:05.435670   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:05.635044   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:05.653369   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:05.694912   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:05.935279   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:06.133875   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:06.157412   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:06.199404   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:06.439358   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:06.634712   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:06.653192   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:06.696864   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:06.935434   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:07.134437   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:07.153579   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:07.199658   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:07.434892   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:07.637723   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 21:39:07.653290   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:07.698505   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:07.935933   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:08.133798   13879 kapi.go:107] duration metric: took 58.053912047s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 21:39:08.153980   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:08.195566   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:08.436002   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:08.654208   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:08.698608   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:08.938596   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:09.169159   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:09.198191   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:09.436122   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:09.653606   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:09.696431   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:09.941286   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:10.154096   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:10.195443   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:10.434655   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:10.655257   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:10.696472   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:10.934876   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:11.153372   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:11.195339   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:11.442637   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:11.666654   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:11.710559   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:11.936003   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:12.391499   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:12.391535   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:12.438182   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:12.653016   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:12.696033   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:12.936970   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:13.153857   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:13.195649   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:13.434866   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:13.654697   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:13.695191   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:13.934918   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:14.154318   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:14.195107   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:14.435102   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:14.653857   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:14.697573   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:14.938599   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:15.154044   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:15.196335   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:15.436718   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:15.653297   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:15.694581   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:15.935135   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:16.158379   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:16.195048   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:16.435019   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:16.653725   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:16.696664   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:16.934866   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:17.154112   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:17.196416   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:17.441024   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:17.653369   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:17.694961   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:17.935067   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 21:39:18.156198   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:18.195732   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:18.435312   13879 kapi.go:107] duration metric: took 1m7.624122445s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 21:39:18.653654   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:18.695541   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:19.157057   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:19.196536   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:19.654004   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:19.695607   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:20.154231   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:20.196558   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:20.653853   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:20.695599   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:21.154020   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:21.195574   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:21.653797   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:21.695329   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:22.154499   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:22.195357   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:22.756173   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:22.756835   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:23.153460   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:23.198670   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:23.655830   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:23.695476   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:24.155526   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:24.196758   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:24.657540   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:24.697128   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:25.153455   13879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 21:39:25.199706   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:25.653895   13879 kapi.go:107] duration metric: took 1m15.571420603s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 21:39:25.695754   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:26.195594   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:26.698251   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:27.197070   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:27.695314   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:28.196112   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:28.695106   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:29.196145   13879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 21:39:29.695944   13879 kapi.go:107] duration metric: took 1m16.52796608s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 21:39:29.697789   13879 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-452179 cluster.
	I0914 21:39:29.699240   13879 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 21:39:29.700603   13879 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 21:39:29.702253   13879 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, helm-tiller, cloud-spanner, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0914 21:39:29.703736   13879 addons.go:502] enable addons completed in 1m27.435394446s: enabled=[ingress-dns storage-provisioner helm-tiller cloud-spanner inspektor-gadget metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0914 21:39:29.703779   13879 start.go:233] waiting for cluster config update ...
	I0914 21:39:29.703803   13879 start.go:242] writing updated cluster config ...
	I0914 21:39:29.704049   13879 ssh_runner.go:195] Run: rm -f paused
	I0914 21:39:29.754811   13879 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 21:39:29.756579   13879 out.go:177] * Done! kubectl is now configured to use "addons-452179" cluster and "default" namespace by default
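(The gcp-auth messages above describe how the addon injects GCP credentials into pods at admission time, and how a pod can opt out via the `gcp-auth-skip-secret` label. A minimal sketch of such a pod manifest follows; the label key is taken from the minikube output above, while the label value "true", the pod name, and the image tag are assumptions for illustration only:

    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-demo          # hypothetical name, for illustration
      labels:
        gcp-auth-skip-secret: "true"    # key confirmed by the output above; value assumed
    spec:
      containers:
      - name: app
        image: gcr.io/google-samples/hello-app:1.0   # sample image also used elsewhere in this report

Because the credentials are injected only when a pod is created, pods that already existed have to be recreated, or the addon re-applied with the --refresh flag the output refers to, e.g. something like:

    out/minikube-linux-amd64 -p addons-452179 addons enable gcp-auth --refresh
)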
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 21:37:16 UTC, ends at Thu 2023-09-14 21:42:20 UTC. --
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.771231035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b0472a5eb0cc95eeb9bb6aeb388bfae35297ca0a7b302c634ff775fc5dbb7dc,PodSandboxId:708451aa5f47f55ac85c17b9d7c3b3f649812745ac41fbeca424c96282023b4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694727733033033196,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5mp8s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 729d1c2d-d3c9-4a7e-b313-4d9f827bb87c,},Annotations:map[string]string{io.kubernetes.container.hash: 44136a17,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b29556ca6f5f9661435599788301db15e7271d56a47246723c0255e62ee20,PodSandboxId:b5c32cf2c4717b687900034742cc854ef9244e173d88dc7c2f316ad658622b90,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694727592866617509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1920c797-5282-4538-b571-a26c3d4d1b76,},Annotations:map[string]string{io.kubernet
es.container.hash: c262180f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6722a3ffe86c24d05d9f1715e96acb9a0af4854e67a603d8163827c9f07518e,PodSandboxId:dc145b18a95a4a09e05d6aad7c3002bf1fe1bfa69374dcec17342320cabeff11,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694727579158158444,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-4kfkp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fc64aa4-6651-4623-8c18-167115ff4449,},Annotations:map[string]string{io.kubernetes.container.hash: 19b7281d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51db16fcebe60901e08f8d3abfde6420efeee9bef619f9d076eb2e555c47d7e,PodSandboxId:c977c5747138b8908e9e2f27f696895974ae1081189b04808d3430be94dfe64f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694727568558120073,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvr4v,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ad5b16b3-9795-43b8-ae19-bb15fa4ecc73,},Annotations:map[string]string{io.kubernetes.container.hash: a0ba0dfd,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35903bcba5206bd51c27bc99df29e0bddce59df1a5b36d7bcd1d573d6252f018,PodSandboxId:99c50d22c76dfc9801db12809356c7d3c6344b55eb4cb2cd2f308f3323babc3a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16947275
48898084879,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2ww7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39d144fb-e6f3-4345-a42e-dc44bb7e131c,},Annotations:map[string]string{io.kubernetes.container.hash: ed81a48a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb2b6b91cc5fec720d97bce3928b3c78c9bc249ffdadc93db23b63865ada079,PodSandboxId:4d2873d50aba6c3d04731118332cda7657569f83ca67d30d035f4297fb7bdf6d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694727533832878277,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c6694,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c27a9bb7-8bdd-4103-89d5-338eb661d579,},Annotations:map[string]string{io.kubernetes.container.hash: 875134da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805f21f5b3fce6f425743640b5026e55f3dd52c971a228d8d593a0c4aedcf82a,PodSandboxId:a6f47293a240594b93615bcefbe929ffcc3f8dd2ad3619598cc3bc1973e89130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694727497770414670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8751287-1dc6-4334-9ede-251e676ff2bb,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1dbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb04e29d061f63c236201be7e190fd974c4ce5704309b89dbdeb711968c90f3,PodSandboxId:4ee337e1606b8803cfa481701ad1d7d0ebd95f5c8565a540bb933f45d7093786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694727492575460425,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgjkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1bf83f-fb5e-4585-9ab3-578ed9dee271,},Annotations:map[string]string{io.kubernetes.container.hash: d2e904a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5799594f48ccb62d6a98e837eaea4930031f459d533b8dba3a9d6ede9eec4ff,PodSandboxId:e1ddea7ac477bdfffee27342b69e1685175e34c62f614c6f4d24c4074b4170e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694727485750236288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jzfkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8649ca6-17a0-458f-a086-4a5c1b4d342b,},Annotations:map[string]string{io.kubernetes.container.hash: 59c40c08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:680b5c9d460585f47a011604a8875e3dba0c7f1821a7f767ee2fe829d36521e6,PodSandboxId:5faedb9796fdaba9f723e782763392669a99ae31ce667cd8b0242fefa642c45f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73
af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694727463503237593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9d5ad6f28803be1b53a0d76b1a0fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eae8d413418b6c8bad6725825734bd33089cb4b5d5b3230589d73438cd4f275,PodSandboxId:aa774c0d11ba9865da6aab319134569db9d10735df4f4e5a8130acb7b13309dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa1
2288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694727463310900061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa8835e8c343ada5199584cff5c23490,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff2c45235b9cfd022cebb31954f96c53d2ca8d6c886e59e3e4af967fc2a7a611,PodSandboxId:3d622d9dae87dcc7c2d5aac185ea114ef79e3c1037b1e32829cb296abb096e82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[s
tring]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694727463190901995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa107110a4b45f89241ef2f8fb1d2da,},Annotations:map[string]string{io.kubernetes.container.hash: f1c75d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f8ddd32f689f7f49e207cb2a74a22a2806f8ec19ab7f81b3d74d0ddff10b8d,PodSandboxId:dbe1e94fc82c76aa3cc25d46568b76eeedba27f97033d556bbff9ad377d6da80,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694727463072223266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e361d43dd4089271ee53c6d43310ad6f,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a6673c3f-6715-42b5-8e69-7621ca823584 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.803897119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8a7d5e8-a30a-4905-b9c3-7d3a3b483752 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.803980179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8a7d5e8-a30a-4905-b9c3-7d3a3b483752 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.804246077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b0472a5eb0cc95eeb9bb6aeb388bfae35297ca0a7b302c634ff775fc5dbb7dc,PodSandboxId:708451aa5f47f55ac85c17b9d7c3b3f649812745ac41fbeca424c96282023b4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694727733033033196,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5mp8s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 729d1c2d-d3c9-4a7e-b313-4d9f827bb87c,},Annotations:map[string]string{io.kubernetes.container.hash: 44136a17,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b29556ca6f5f9661435599788301db15e7271d56a47246723c0255e62ee20,PodSandboxId:b5c32cf2c4717b687900034742cc854ef9244e173d88dc7c2f316ad658622b90,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694727592866617509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1920c797-5282-4538-b571-a26c3d4d1b76,},Annotations:map[string]string{io.kubernet
es.container.hash: c262180f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6722a3ffe86c24d05d9f1715e96acb9a0af4854e67a603d8163827c9f07518e,PodSandboxId:dc145b18a95a4a09e05d6aad7c3002bf1fe1bfa69374dcec17342320cabeff11,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694727579158158444,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-4kfkp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fc64aa4-6651-4623-8c18-167115ff4449,},Annotations:map[string]string{io.kubernetes.container.hash: 19b7281d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51db16fcebe60901e08f8d3abfde6420efeee9bef619f9d076eb2e555c47d7e,PodSandboxId:c977c5747138b8908e9e2f27f696895974ae1081189b04808d3430be94dfe64f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694727568558120073,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvr4v,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ad5b16b3-9795-43b8-ae19-bb15fa4ecc73,},Annotations:map[string]string{io.kubernetes.container.hash: a0ba0dfd,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35903bcba5206bd51c27bc99df29e0bddce59df1a5b36d7bcd1d573d6252f018,PodSandboxId:99c50d22c76dfc9801db12809356c7d3c6344b55eb4cb2cd2f308f3323babc3a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16947275
48898084879,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2ww7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39d144fb-e6f3-4345-a42e-dc44bb7e131c,},Annotations:map[string]string{io.kubernetes.container.hash: ed81a48a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb2b6b91cc5fec720d97bce3928b3c78c9bc249ffdadc93db23b63865ada079,PodSandboxId:4d2873d50aba6c3d04731118332cda7657569f83ca67d30d035f4297fb7bdf6d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694727533832878277,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c6694,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c27a9bb7-8bdd-4103-89d5-338eb661d579,},Annotations:map[string]string{io.kubernetes.container.hash: 875134da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805f21f5b3fce6f425743640b5026e55f3dd52c971a228d8d593a0c4aedcf82a,PodSandboxId:a6f47293a240594b93615bcefbe929ffcc3f8dd2ad3619598cc3bc1973e89130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694727497770414670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8751287-1dc6-4334-9ede-251e676ff2bb,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1dbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb04e29d061f63c236201be7e190fd974c4ce5704309b89dbdeb711968c90f3,PodSandboxId:4ee337e1606b8803cfa481701ad1d7d0ebd95f5c8565a540bb933f45d7093786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694727492575460425,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgjkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1bf83f-fb5e-4585-9ab3-578ed9dee271,},Annotations:map[string]string{io.kubernetes.container.hash: d2e904a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5799594f48ccb62d6a98e837eaea4930031f459d533b8dba3a9d6ede9eec4ff,PodSandboxId:e1ddea7ac477bdfffee27342b69e1685175e34c62f614c6f4d24c4074b4170e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694727485750236288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jzfkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8649ca6-17a0-458f-a086-4a5c1b4d342b,},Annotations:map[string]string{io.kubernetes.container.hash: 59c40c08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:680b5c9d460585f47a011604a8875e3dba0c7f1821a7f767ee2fe829d36521e6,PodSandboxId:5faedb9796fdaba9f723e782763392669a99ae31ce667cd8b0242fefa642c45f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73
af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694727463503237593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9d5ad6f28803be1b53a0d76b1a0fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eae8d413418b6c8bad6725825734bd33089cb4b5d5b3230589d73438cd4f275,PodSandboxId:aa774c0d11ba9865da6aab319134569db9d10735df4f4e5a8130acb7b13309dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa1
2288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694727463310900061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa8835e8c343ada5199584cff5c23490,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff2c45235b9cfd022cebb31954f96c53d2ca8d6c886e59e3e4af967fc2a7a611,PodSandboxId:3d622d9dae87dcc7c2d5aac185ea114ef79e3c1037b1e32829cb296abb096e82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[s
tring]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694727463190901995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa107110a4b45f89241ef2f8fb1d2da,},Annotations:map[string]string{io.kubernetes.container.hash: f1c75d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f8ddd32f689f7f49e207cb2a74a22a2806f8ec19ab7f81b3d74d0ddff10b8d,PodSandboxId:dbe1e94fc82c76aa3cc25d46568b76eeedba27f97033d556bbff9ad377d6da80,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694727463072223266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e361d43dd4089271ee53c6d43310ad6f,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8a7d5e8-a30a-4905-b9c3-7d3a3b483752 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.835706078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4db3ee5d-fd52-4d28-b41f-b01a130f6719 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.835799476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4db3ee5d-fd52-4d28-b41f-b01a130f6719 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.836084370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b0472a5eb0cc95eeb9bb6aeb388bfae35297ca0a7b302c634ff775fc5dbb7dc,PodSandboxId:708451aa5f47f55ac85c17b9d7c3b3f649812745ac41fbeca424c96282023b4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694727733033033196,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5mp8s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 729d1c2d-d3c9-4a7e-b313-4d9f827bb87c,},Annotations:map[string]string{io.kubernetes.container.hash: 44136a17,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b29556ca6f5f9661435599788301db15e7271d56a47246723c0255e62ee20,PodSandboxId:b5c32cf2c4717b687900034742cc854ef9244e173d88dc7c2f316ad658622b90,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694727592866617509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1920c797-5282-4538-b571-a26c3d4d1b76,},Annotations:map[string]string{io.kubernet
es.container.hash: c262180f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6722a3ffe86c24d05d9f1715e96acb9a0af4854e67a603d8163827c9f07518e,PodSandboxId:dc145b18a95a4a09e05d6aad7c3002bf1fe1bfa69374dcec17342320cabeff11,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694727579158158444,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-4kfkp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fc64aa4-6651-4623-8c18-167115ff4449,},Annotations:map[string]string{io.kubernetes.container.hash: 19b7281d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51db16fcebe60901e08f8d3abfde6420efeee9bef619f9d076eb2e555c47d7e,PodSandboxId:c977c5747138b8908e9e2f27f696895974ae1081189b04808d3430be94dfe64f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694727568558120073,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvr4v,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ad5b16b3-9795-43b8-ae19-bb15fa4ecc73,},Annotations:map[string]string{io.kubernetes.container.hash: a0ba0dfd,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35903bcba5206bd51c27bc99df29e0bddce59df1a5b36d7bcd1d573d6252f018,PodSandboxId:99c50d22c76dfc9801db12809356c7d3c6344b55eb4cb2cd2f308f3323babc3a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16947275
48898084879,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2ww7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39d144fb-e6f3-4345-a42e-dc44bb7e131c,},Annotations:map[string]string{io.kubernetes.container.hash: ed81a48a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb2b6b91cc5fec720d97bce3928b3c78c9bc249ffdadc93db23b63865ada079,PodSandboxId:4d2873d50aba6c3d04731118332cda7657569f83ca67d30d035f4297fb7bdf6d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694727533832878277,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c6694,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c27a9bb7-8bdd-4103-89d5-338eb661d579,},Annotations:map[string]string{io.kubernetes.container.hash: 875134da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805f21f5b3fce6f425743640b5026e55f3dd52c971a228d8d593a0c4aedcf82a,PodSandboxId:a6f47293a240594b93615bcefbe929ffcc3f8dd2ad3619598cc3bc1973e89130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694727497770414670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8751287-1dc6-4334-9ede-251e676ff2bb,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1dbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb04e29d061f63c236201be7e190fd974c4ce5704309b89dbdeb711968c90f3,PodSandboxId:4ee337e1606b8803cfa481701ad1d7d0ebd95f5c8565a540bb933f45d7093786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694727492575460425,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgjkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1bf83f-fb5e-4585-9ab3-578ed9dee271,},Annotations:map[string]string{io.kubernetes.container.hash: d2e904a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5799594f48ccb62d6a98e837eaea4930031f459d533b8dba3a9d6ede9eec4ff,PodSandboxId:e1ddea7ac477bdfffee27342b69e1685175e34c62f614c6f4d24c4074b4170e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694727485750236288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jzfkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8649ca6-17a0-458f-a086-4a5c1b4d342b,},Annotations:map[string]string{io.kubernetes.container.hash: 59c40c08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:680b5c9d460585f47a011604a8875e3dba0c7f1821a7f767ee2fe829d36521e6,PodSandboxId:5faedb9796fdaba9f723e782763392669a99ae31ce667cd8b0242fefa642c45f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73
af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694727463503237593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9d5ad6f28803be1b53a0d76b1a0fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eae8d413418b6c8bad6725825734bd33089cb4b5d5b3230589d73438cd4f275,PodSandboxId:aa774c0d11ba9865da6aab319134569db9d10735df4f4e5a8130acb7b13309dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa1
2288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694727463310900061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa8835e8c343ada5199584cff5c23490,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff2c45235b9cfd022cebb31954f96c53d2ca8d6c886e59e3e4af967fc2a7a611,PodSandboxId:3d622d9dae87dcc7c2d5aac185ea114ef79e3c1037b1e32829cb296abb096e82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[s
tring]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694727463190901995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa107110a4b45f89241ef2f8fb1d2da,},Annotations:map[string]string{io.kubernetes.container.hash: f1c75d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f8ddd32f689f7f49e207cb2a74a22a2806f8ec19ab7f81b3d74d0ddff10b8d,PodSandboxId:dbe1e94fc82c76aa3cc25d46568b76eeedba27f97033d556bbff9ad377d6da80,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694727463072223266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e361d43dd4089271ee53c6d43310ad6f,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4db3ee5d-fd52-4d28-b41f-b01a130f6719 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.866263191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=85d22fc3-446c-488f-b8e7-4614b22e6c7f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.866505771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=85d22fc3-446c-488f-b8e7-4614b22e6c7f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.866893060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b0472a5eb0cc95eeb9bb6aeb388bfae35297ca0a7b302c634ff775fc5dbb7dc,PodSandboxId:708451aa5f47f55ac85c17b9d7c3b3f649812745ac41fbeca424c96282023b4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694727733033033196,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5mp8s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 729d1c2d-d3c9-4a7e-b313-4d9f827bb87c,},Annotations:map[string]string{io.kubernetes.container.hash: 44136a17,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b29556ca6f5f9661435599788301db15e7271d56a47246723c0255e62ee20,PodSandboxId:b5c32cf2c4717b687900034742cc854ef9244e173d88dc7c2f316ad658622b90,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694727592866617509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1920c797-5282-4538-b571-a26c3d4d1b76,},Annotations:map[string]string{io.kubernet
es.container.hash: c262180f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6722a3ffe86c24d05d9f1715e96acb9a0af4854e67a603d8163827c9f07518e,PodSandboxId:dc145b18a95a4a09e05d6aad7c3002bf1fe1bfa69374dcec17342320cabeff11,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694727579158158444,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-4kfkp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fc64aa4-6651-4623-8c18-167115ff4449,},Annotations:map[string]string{io.kubernetes.container.hash: 19b7281d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51db16fcebe60901e08f8d3abfde6420efeee9bef619f9d076eb2e555c47d7e,PodSandboxId:c977c5747138b8908e9e2f27f696895974ae1081189b04808d3430be94dfe64f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694727568558120073,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvr4v,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ad5b16b3-9795-43b8-ae19-bb15fa4ecc73,},Annotations:map[string]string{io.kubernetes.container.hash: a0ba0dfd,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35903bcba5206bd51c27bc99df29e0bddce59df1a5b36d7bcd1d573d6252f018,PodSandboxId:99c50d22c76dfc9801db12809356c7d3c6344b55eb4cb2cd2f308f3323babc3a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16947275
48898084879,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2ww7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39d144fb-e6f3-4345-a42e-dc44bb7e131c,},Annotations:map[string]string{io.kubernetes.container.hash: ed81a48a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb2b6b91cc5fec720d97bce3928b3c78c9bc249ffdadc93db23b63865ada079,PodSandboxId:4d2873d50aba6c3d04731118332cda7657569f83ca67d30d035f4297fb7bdf6d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694727533832878277,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c6694,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c27a9bb7-8bdd-4103-89d5-338eb661d579,},Annotations:map[string]string{io.kubernetes.container.hash: 875134da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805f21f5b3fce6f425743640b5026e55f3dd52c971a228d8d593a0c4aedcf82a,PodSandboxId:a6f47293a240594b93615bcefbe929ffcc3f8dd2ad3619598cc3bc1973e89130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694727497770414670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8751287-1dc6-4334-9ede-251e676ff2bb,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1dbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb04e29d061f63c236201be7e190fd974c4ce5704309b89dbdeb711968c90f3,PodSandboxId:4ee337e1606b8803cfa481701ad1d7d0ebd95f5c8565a540bb933f45d7093786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694727492575460425,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgjkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1bf83f-fb5e-4585-9ab3-578ed9dee271,},Annotations:map[string]string{io.kubernetes.container.hash: d2e904a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5799594f48ccb62d6a98e837eaea4930031f459d533b8dba3a9d6ede9eec4ff,PodSandboxId:e1ddea7ac477bdfffee27342b69e1685175e34c62f614c6f4d24c4074b4170e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694727485750236288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jzfkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8649ca6-17a0-458f-a086-4a5c1b4d342b,},Annotations:map[string]string{io.kubernetes.container.hash: 59c40c08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:680b5c9d460585f47a011604a8875e3dba0c7f1821a7f767ee2fe829d36521e6,PodSandboxId:5faedb9796fdaba9f723e782763392669a99ae31ce667cd8b0242fefa642c45f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73
af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694727463503237593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9d5ad6f28803be1b53a0d76b1a0fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eae8d413418b6c8bad6725825734bd33089cb4b5d5b3230589d73438cd4f275,PodSandboxId:aa774c0d11ba9865da6aab319134569db9d10735df4f4e5a8130acb7b13309dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa1
2288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694727463310900061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa8835e8c343ada5199584cff5c23490,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff2c45235b9cfd022cebb31954f96c53d2ca8d6c886e59e3e4af967fc2a7a611,PodSandboxId:3d622d9dae87dcc7c2d5aac185ea114ef79e3c1037b1e32829cb296abb096e82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[s
tring]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694727463190901995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa107110a4b45f89241ef2f8fb1d2da,},Annotations:map[string]string{io.kubernetes.container.hash: f1c75d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f8ddd32f689f7f49e207cb2a74a22a2806f8ec19ab7f81b3d74d0ddff10b8d,PodSandboxId:dbe1e94fc82c76aa3cc25d46568b76eeedba27f97033d556bbff9ad377d6da80,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694727463072223266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e361d43dd4089271ee53c6d43310ad6f,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=85d22fc3-446c-488f-b8e7-4614b22e6c7f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.982887791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9a2208a6-299f-40db-bd81-76efe062999a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.982947973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9a2208a6-299f-40db-bd81-76efe062999a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:19 addons-452179 crio[714]: time="2023-09-14 21:42:19.983338366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b0472a5eb0cc95eeb9bb6aeb388bfae35297ca0a7b302c634ff775fc5dbb7dc,PodSandboxId:708451aa5f47f55ac85c17b9d7c3b3f649812745ac41fbeca424c96282023b4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694727733033033196,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5mp8s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 729d1c2d-d3c9-4a7e-b313-4d9f827bb87c,},Annotations:map[string]string{io.kubernetes.container.hash: 44136a17,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b29556ca6f5f9661435599788301db15e7271d56a47246723c0255e62ee20,PodSandboxId:b5c32cf2c4717b687900034742cc854ef9244e173d88dc7c2f316ad658622b90,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694727592866617509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1920c797-5282-4538-b571-a26c3d4d1b76,},Annotations:map[string]string{io.kubernet
es.container.hash: c262180f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6722a3ffe86c24d05d9f1715e96acb9a0af4854e67a603d8163827c9f07518e,PodSandboxId:dc145b18a95a4a09e05d6aad7c3002bf1fe1bfa69374dcec17342320cabeff11,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694727579158158444,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-4kfkp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fc64aa4-6651-4623-8c18-167115ff4449,},Annotations:map[string]string{io.kubernetes.container.hash: 19b7281d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51db16fcebe60901e08f8d3abfde6420efeee9bef619f9d076eb2e555c47d7e,PodSandboxId:c977c5747138b8908e9e2f27f696895974ae1081189b04808d3430be94dfe64f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694727568558120073,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvr4v,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ad5b16b3-9795-43b8-ae19-bb15fa4ecc73,},Annotations:map[string]string{io.kubernetes.container.hash: a0ba0dfd,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35903bcba5206bd51c27bc99df29e0bddce59df1a5b36d7bcd1d573d6252f018,PodSandboxId:99c50d22c76dfc9801db12809356c7d3c6344b55eb4cb2cd2f308f3323babc3a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16947275
48898084879,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2ww7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39d144fb-e6f3-4345-a42e-dc44bb7e131c,},Annotations:map[string]string{io.kubernetes.container.hash: ed81a48a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb2b6b91cc5fec720d97bce3928b3c78c9bc249ffdadc93db23b63865ada079,PodSandboxId:4d2873d50aba6c3d04731118332cda7657569f83ca67d30d035f4297fb7bdf6d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694727533832878277,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c6694,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c27a9bb7-8bdd-4103-89d5-338eb661d579,},Annotations:map[string]string{io.kubernetes.container.hash: 875134da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805f21f5b3fce6f425743640b5026e55f3dd52c971a228d8d593a0c4aedcf82a,PodSandboxId:a6f47293a240594b93615bcefbe929ffcc3f8dd2ad3619598cc3bc1973e89130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694727497770414670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8751287-1dc6-4334-9ede-251e676ff2bb,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1dbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb04e29d061f63c236201be7e190fd974c4ce5704309b89dbdeb711968c90f3,PodSandboxId:4ee337e1606b8803cfa481701ad1d7d0ebd95f5c8565a540bb933f45d7093786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694727492575460425,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgjkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1bf83f-fb5e-4585-9ab3-578ed9dee271,},Annotations:map[string]string{io.kubernetes.container.hash: d2e904a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5799594f48ccb62d6a98e837eaea4930031f459d533b8dba3a9d6ede9eec4ff,PodSandboxId:e1ddea7ac477bdfffee27342b69e1685175e34c62f614c6f4d24c4074b4170e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694727485750236288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jzfkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8649ca6-17a0-458f-a086-4a5c1b4d342b,},Annotations:map[string]string{io.kubernetes.container.hash: 59c40c08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:680b5c9d460585f47a011604a8875e3dba0c7f1821a7f767ee2fe829d36521e6,PodSandboxId:5faedb9796fdaba9f723e782763392669a99ae31ce667cd8b0242fefa642c45f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73
af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694727463503237593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9d5ad6f28803be1b53a0d76b1a0fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eae8d413418b6c8bad6725825734bd33089cb4b5d5b3230589d73438cd4f275,PodSandboxId:aa774c0d11ba9865da6aab319134569db9d10735df4f4e5a8130acb7b13309dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa1
2288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694727463310900061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa8835e8c343ada5199584cff5c23490,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff2c45235b9cfd022cebb31954f96c53d2ca8d6c886e59e3e4af967fc2a7a611,PodSandboxId:3d622d9dae87dcc7c2d5aac185ea114ef79e3c1037b1e32829cb296abb096e82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[s
tring]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694727463190901995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa107110a4b45f89241ef2f8fb1d2da,},Annotations:map[string]string{io.kubernetes.container.hash: f1c75d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f8ddd32f689f7f49e207cb2a74a22a2806f8ec19ab7f81b3d74d0ddff10b8d,PodSandboxId:dbe1e94fc82c76aa3cc25d46568b76eeedba27f97033d556bbff9ad377d6da80,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694727463072223266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e361d43dd4089271ee53c6d43310ad6f,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9a2208a6-299f-40db-bd81-76efe062999a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:20 addons-452179 crio[714]: time="2023-09-14 21:42:20.013076059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ec417f59-87d2-4980-b7f4-c632753fa304 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:20 addons-452179 crio[714]: time="2023-09-14 21:42:20.013147001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ec417f59-87d2-4980-b7f4-c632753fa304 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:42:20 addons-452179 crio[714]: time="2023-09-14 21:42:20.013567622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b0472a5eb0cc95eeb9bb6aeb388bfae35297ca0a7b302c634ff775fc5dbb7dc,PodSandboxId:708451aa5f47f55ac85c17b9d7c3b3f649812745ac41fbeca424c96282023b4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694727733033033196,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-5mp8s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 729d1c2d-d3c9-4a7e-b313-4d9f827bb87c,},Annotations:map[string]string{io.kubernetes.container.hash: 44136a17,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b29556ca6f5f9661435599788301db15e7271d56a47246723c0255e62ee20,PodSandboxId:b5c32cf2c4717b687900034742cc854ef9244e173d88dc7c2f316ad658622b90,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694727592866617509,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1920c797-5282-4538-b571-a26c3d4d1b76,},Annotations:map[string]string{io.kubernet
es.container.hash: c262180f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6722a3ffe86c24d05d9f1715e96acb9a0af4854e67a603d8163827c9f07518e,PodSandboxId:dc145b18a95a4a09e05d6aad7c3002bf1fe1bfa69374dcec17342320cabeff11,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694727579158158444,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-4kfkp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 2fc64aa4-6651-4623-8c18-167115ff4449,},Annotations:map[string]string{io.kubernetes.container.hash: 19b7281d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51db16fcebe60901e08f8d3abfde6420efeee9bef619f9d076eb2e555c47d7e,PodSandboxId:c977c5747138b8908e9e2f27f696895974ae1081189b04808d3430be94dfe64f,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694727568558120073,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rvr4v,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ad5b16b3-9795-43b8-ae19-bb15fa4ecc73,},Annotations:map[string]string{io.kubernetes.container.hash: a0ba0dfd,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35903bcba5206bd51c27bc99df29e0bddce59df1a5b36d7bcd1d573d6252f018,PodSandboxId:99c50d22c76dfc9801db12809356c7d3c6344b55eb4cb2cd2f308f3323babc3a,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:16947275
48898084879,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2ww7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39d144fb-e6f3-4345-a42e-dc44bb7e131c,},Annotations:map[string]string{io.kubernetes.container.hash: ed81a48a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb2b6b91cc5fec720d97bce3928b3c78c9bc249ffdadc93db23b63865ada079,PodSandboxId:4d2873d50aba6c3d04731118332cda7657569f83ca67d30d035f4297fb7bdf6d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694727533832878277,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c6694,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c27a9bb7-8bdd-4103-89d5-338eb661d579,},Annotations:map[string]string{io.kubernetes.container.hash: 875134da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805f21f5b3fce6f425743640b5026e55f3dd52c971a228d8d593a0c4aedcf82a,PodSandboxId:a6f47293a240594b93615bcefbe929ffcc3f8dd2ad3619598cc3bc1973e89130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694727497770414670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8751287-1dc6-4334-9ede-251e676ff2bb,},Annotations:map[string]string{io.kubernetes.container.hash: a5b1dbf6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb04e29d061f63c236201be7e190fd974c4ce5704309b89dbdeb711968c90f3,PodSandboxId:4ee337e1606b8803cfa481701ad1d7d0ebd95f5c8565a540bb933f45d7093786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,St
ate:CONTAINER_RUNNING,CreatedAt:1694727492575460425,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cgjkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1bf83f-fb5e-4585-9ab3-578ed9dee271,},Annotations:map[string]string{io.kubernetes.container.hash: d2e904a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5799594f48ccb62d6a98e837eaea4930031f459d533b8dba3a9d6ede9eec4ff,PodSandboxId:e1ddea7ac477bdfffee27342b69e1685175e34c62f614c6f4d24c4074b4170e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
694727485750236288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jzfkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8649ca6-17a0-458f-a086-4a5c1b4d342b,},Annotations:map[string]string{io.kubernetes.container.hash: 59c40c08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:680b5c9d460585f47a011604a8875e3dba0c7f1821a7f767ee2fe829d36521e6,PodSandboxId:5faedb9796fdaba9f723e782763392669a99ae31ce667cd8b0242fefa642c45f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73
af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694727463503237593,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd9d5ad6f28803be1b53a0d76b1a0fa2,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eae8d413418b6c8bad6725825734bd33089cb4b5d5b3230589d73438cd4f275,PodSandboxId:aa774c0d11ba9865da6aab319134569db9d10735df4f4e5a8130acb7b13309dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa1
2288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694727463310900061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa8835e8c343ada5199584cff5c23490,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff2c45235b9cfd022cebb31954f96c53d2ca8d6c886e59e3e4af967fc2a7a611,PodSandboxId:3d622d9dae87dcc7c2d5aac185ea114ef79e3c1037b1e32829cb296abb096e82,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[s
tring]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694727463190901995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa107110a4b45f89241ef2f8fb1d2da,},Annotations:map[string]string{io.kubernetes.container.hash: f1c75d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f8ddd32f689f7f49e207cb2a74a22a2806f8ec19ab7f81b3d74d0ddff10b8d,PodSandboxId:dbe1e94fc82c76aa3cc25d46568b76eeedba27f97033d556bbff9ad377d6da80,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k
8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694727463072223266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-452179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e361d43dd4089271ee53c6d43310ad6f,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ec417f59-87d2-4980-b7f4-c632753fa304 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID
	7b0472a5eb0cc       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb                      7 seconds ago       Running             hello-world-app           0                   708451aa5f47f
	987b29556ca6f       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   b5c32cf2c4717
	b6722a3ffe86c       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   dc145b18a95a4
	f51db16fcebe6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   c977c5747138b
	35903bcba5206       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                             3 minutes ago       Exited              patch                     2                   99c50d22c76df
	0cb2b6b91cc5f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   4d2873d50aba6
	805f21f5b3fce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   a6f47293a2405
	efb04e29d061f       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                                             4 minutes ago       Running             kube-proxy                0                   4ee337e1606b8
	f5799594f48cc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   e1ddea7ac477b
	680b5c9d46058       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                                             4 minutes ago       Running             kube-scheduler            0                   5faedb9796fda
	2eae8d413418b       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                                             4 minutes ago       Running             kube-apiserver            0                   aa774c0d11ba9
	ff2c45235b9cf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   3d622d9dae87d
	55f8ddd32f689       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                                             4 minutes ago       Running             kube-controller-manager   0                   dbe1e94fc82c7
	
	* 
	* ==> coredns [f5799594f48ccb62d6a98e837eaea4930031f459d533b8dba3a9d6ede9eec4ff] <==
	* [INFO] 10.244.0.8:48354 - 15651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000132513s
	[INFO] 10.244.0.8:35644 - 29452 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073969s
	[INFO] 10.244.0.8:35644 - 61710 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00018742s
	[INFO] 10.244.0.8:60169 - 48199 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041412s
	[INFO] 10.244.0.8:60169 - 41801 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000044204s
	[INFO] 10.244.0.8:45020 - 43849 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000142159s
	[INFO] 10.244.0.8:45020 - 56143 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000237252s
	[INFO] 10.244.0.8:53839 - 27481 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00013158s
	[INFO] 10.244.0.8:53839 - 40548 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081226s
	[INFO] 10.244.0.8:48802 - 20654 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124691s
	[INFO] 10.244.0.8:48802 - 23208 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00030822s
	[INFO] 10.244.0.8:37057 - 11980 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.001949714s
	[INFO] 10.244.0.8:37057 - 13770 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099861s
	[INFO] 10.244.0.8:59513 - 13298 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00004059s
	[INFO] 10.244.0.8:59513 - 21489 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095278s
	[INFO] 10.244.0.19:40730 - 17524 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000452036s
	[INFO] 10.244.0.19:35317 - 32178 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000135467s
	[INFO] 10.244.0.19:48627 - 47591 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104942s
	[INFO] 10.244.0.19:39923 - 27550 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069257s
	[INFO] 10.244.0.19:58281 - 10798 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000211616s
	[INFO] 10.244.0.19:38244 - 42630 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064378s
	[INFO] 10.244.0.19:46937 - 15117 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000915476s
	[INFO] 10.244.0.19:59323 - 21780 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000586779s
	[INFO] 10.244.0.21:54164 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000209057s
	[INFO] 10.244.0.21:53075 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091865s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-452179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-452179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=addons-452179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T21_37_50_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-452179
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:37:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-452179
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 21:42:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 21:40:53 +0000   Thu, 14 Sep 2023 21:37:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 21:40:53 +0000   Thu, 14 Sep 2023 21:37:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 21:40:53 +0000   Thu, 14 Sep 2023 21:37:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 21:40:53 +0000   Thu, 14 Sep 2023 21:37:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    addons-452179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 3306c165b1674a85a5d7379729214ad3
	  System UUID:                3306c165-b167-4a85-a5d7-379729214ad3
	  Boot ID:                    a4ae6f24-c5cb-4661-a5c8-4789bb227063
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-5mp8s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-rvr4v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  headlamp                    headlamp-699c48fb74-4kfkp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 coredns-5dd5756b68-jzfkt                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m18s
	  kube-system                 etcd-addons-452179                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m30s
	  kube-system                 kube-apiserver-addons-452179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-addons-452179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-cgjkd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-scheduler-addons-452179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m38s (x8 over 4m38s)  kubelet          Node addons-452179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s (x8 over 4m38s)  kubelet          Node addons-452179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s (x7 over 4m38s)  kubelet          Node addons-452179 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m30s                  kubelet          Node addons-452179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s                  kubelet          Node addons-452179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s                  kubelet          Node addons-452179 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m30s                  kubelet          Node addons-452179 status is now: NodeReady
	  Normal  RegisteredNode           4m19s                  node-controller  Node addons-452179 event: Registered Node addons-452179 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.137655] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.946518] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.595987] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.103345] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.132367] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.097022] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.194401] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +9.226676] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +8.246230] systemd-fstab-generator[1240]: Ignoring "noauto" for root device
	[Sep14 21:38] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.946347] kauditd_printk_skb: 59 callbacks suppressed
	[ +19.470690] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.705074] kauditd_printk_skb: 16 callbacks suppressed
	[Sep14 21:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.502108] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.726001] kauditd_printk_skb: 1 callbacks suppressed
	[ +11.682132] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.431747] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.149725] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.257077] kauditd_printk_skb: 9 callbacks suppressed
	[Sep14 21:41] kauditd_printk_skb: 12 callbacks suppressed
	[Sep14 21:42] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [ff2c45235b9cfd022cebb31954f96c53d2ca8d6c886e59e3e4af967fc2a7a611] <==
	* {"level":"info","ts":"2023-09-14T21:38:50.183831Z","caller":"traceutil/trace.go:171","msg":"trace[954334617] transaction","detail":"{read_only:false; response_revision:914; number_of_response:1; }","duration":"176.675131ms","start":"2023-09-14T21:38:50.007144Z","end":"2023-09-14T21:38:50.183819Z","steps":["trace[954334617] 'process raft request'  (duration: 176.623436ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T21:38:50.184064Z","caller":"traceutil/trace.go:171","msg":"trace[101717316] transaction","detail":"{read_only:false; response_revision:913; number_of_response:1; }","duration":"321.606531ms","start":"2023-09-14T21:38:49.862443Z","end":"2023-09-14T21:38:50.18405Z","steps":["trace[101717316] 'process raft request'  (duration: 287.800339ms)","trace[101717316] 'compare'  (duration: 33.289725ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T21:38:50.186679Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T21:38:49.862179Z","time spent":"324.453377ms","remote":"127.0.0.1:50838","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5357,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/metrics-server\" mod_revision:522 > success:<request_put:<key:\"/registry/deployments/kube-system/metrics-server\" value_size:5301 >> failure:<request_range:<key:\"/registry/deployments/kube-system/metrics-server\" > >"}
	{"level":"info","ts":"2023-09-14T21:38:50.184206Z","caller":"traceutil/trace.go:171","msg":"trace[1930864178] linearizableReadLoop","detail":"{readStateIndex:938; appliedIndex:937; }","duration":"277.533411ms","start":"2023-09-14T21:38:49.906665Z","end":"2023-09-14T21:38:50.184199Z","steps":["trace[1930864178] 'read index received'  (duration: 243.584635ms)","trace[1930864178] 'applied index is now lower than readState.Index'  (duration: 33.948118ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T21:38:50.184324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.630859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-14T21:38:50.186904Z","caller":"traceutil/trace.go:171","msg":"trace[1832821691] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:914; }","duration":"280.278707ms","start":"2023-09-14T21:38:49.906614Z","end":"2023-09-14T21:38:50.186893Z","steps":["trace[1832821691] 'agreement among raft nodes before linearized reading'  (duration: 277.615143ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T21:38:50.187102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.959032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:77946"}
	{"level":"info","ts":"2023-09-14T21:38:50.187235Z","caller":"traceutil/trace.go:171","msg":"trace[506628521] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:914; }","duration":"260.088838ms","start":"2023-09-14T21:38:49.927133Z","end":"2023-09-14T21:38:50.187222Z","steps":["trace[506628521] 'agreement among raft nodes before linearized reading'  (duration: 259.877605ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T21:38:50.187805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.279742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-09-14T21:38:50.188387Z","caller":"traceutil/trace.go:171","msg":"trace[1088445838] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:914; }","duration":"160.862312ms","start":"2023-09-14T21:38:50.027514Z","end":"2023-09-14T21:38:50.188377Z","steps":["trace[1088445838] 'agreement among raft nodes before linearized reading'  (duration: 160.251224ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T21:39:02.436918Z","caller":"traceutil/trace.go:171","msg":"trace[1533821860] transaction","detail":"{read_only:false; response_revision:999; number_of_response:1; }","duration":"154.993469ms","start":"2023-09-14T21:39:02.281905Z","end":"2023-09-14T21:39:02.436899Z","steps":["trace[1533821860] 'process raft request'  (duration: 153.957053ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T21:39:12.380071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.16639ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-14T21:39:12.381984Z","caller":"traceutil/trace.go:171","msg":"trace[1975289109] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1058; }","duration":"307.158746ms","start":"2023-09-14T21:39:12.074791Z","end":"2023-09-14T21:39:12.381949Z","steps":["trace[1975289109] 'count revisions from in-memory index tree'  (duration: 305.085918ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T21:39:12.382037Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T21:39:12.074778Z","time spent":"307.243025ms","remote":"127.0.0.1:50778","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":11,"response size":30,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	{"level":"warn","ts":"2023-09-14T21:39:12.38051Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.345733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13836"}
	{"level":"info","ts":"2023-09-14T21:39:12.382435Z","caller":"traceutil/trace.go:171","msg":"trace[1376405305] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1058; }","duration":"235.23653ms","start":"2023-09-14T21:39:12.147149Z","end":"2023-09-14T21:39:12.382386Z","steps":["trace[1376405305] 'range keys from in-memory index tree'  (duration: 233.259305ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T21:39:12.38055Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.982397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10928"}
	{"level":"info","ts":"2023-09-14T21:39:12.382561Z","caller":"traceutil/trace.go:171","msg":"trace[916597018] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1058; }","duration":"191.993277ms","start":"2023-09-14T21:39:12.190563Z","end":"2023-09-14T21:39:12.382556Z","steps":["trace[916597018] 'range keys from in-memory index tree'  (duration: 189.909958ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T21:39:22.74907Z","caller":"traceutil/trace.go:171","msg":"trace[71943896] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1124; }","duration":"101.197749ms","start":"2023-09-14T21:39:22.647856Z","end":"2023-09-14T21:39:22.749054Z","steps":["trace[71943896] 'read index received'  (duration: 101.052417ms)","trace[71943896] 'applied index is now lower than readState.Index'  (duration: 144.453µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T21:39:22.749396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.547848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13841"}
	{"level":"info","ts":"2023-09-14T21:39:22.749504Z","caller":"traceutil/trace.go:171","msg":"trace[475873848] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1092; }","duration":"101.66931ms","start":"2023-09-14T21:39:22.647826Z","end":"2023-09-14T21:39:22.749495Z","steps":["trace[475873848] 'agreement among raft nodes before linearized reading'  (duration: 101.412839ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T21:39:22.749643Z","caller":"traceutil/trace.go:171","msg":"trace[17968367] transaction","detail":"{read_only:false; response_revision:1092; number_of_response:1; }","duration":"185.094676ms","start":"2023-09-14T21:39:22.56452Z","end":"2023-09-14T21:39:22.749614Z","steps":["trace[17968367] 'process raft request'  (duration: 184.421925ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T21:39:52.62218Z","caller":"traceutil/trace.go:171","msg":"trace[1611977548] transaction","detail":"{read_only:false; response_revision:1354; number_of_response:1; }","duration":"294.08099ms","start":"2023-09-14T21:39:52.328059Z","end":"2023-09-14T21:39:52.62214Z","steps":["trace[1611977548] 'process raft request'  (duration: 293.914341ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T21:39:52.777094Z","caller":"traceutil/trace.go:171","msg":"trace[217140254] transaction","detail":"{read_only:false; response_revision:1355; number_of_response:1; }","duration":"187.604461ms","start":"2023-09-14T21:39:52.589471Z","end":"2023-09-14T21:39:52.777075Z","steps":["trace[217140254] 'process raft request'  (duration: 187.373154ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T21:40:32.871913Z","caller":"traceutil/trace.go:171","msg":"trace[174993740] transaction","detail":"{read_only:false; response_revision:1436; number_of_response:1; }","duration":"150.340118ms","start":"2023-09-14T21:40:32.721546Z","end":"2023-09-14T21:40:32.871886Z","steps":["trace[174993740] 'process raft request'  (duration: 150.065021ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [f51db16fcebe60901e08f8d3abfde6420efeee9bef619f9d076eb2e555c47d7e] <==
	* 2023/09/14 21:39:28 GCP Auth Webhook started!
	2023/09/14 21:39:31 Ready to marshal response ...
	2023/09/14 21:39:31 Ready to write response ...
	2023/09/14 21:39:31 Ready to marshal response ...
	2023/09/14 21:39:31 Ready to write response ...
	2023/09/14 21:39:31 Ready to marshal response ...
	2023/09/14 21:39:31 Ready to write response ...
	2023/09/14 21:39:39 Ready to marshal response ...
	2023/09/14 21:39:39 Ready to write response ...
	2023/09/14 21:39:41 Ready to marshal response ...
	2023/09/14 21:39:41 Ready to write response ...
	2023/09/14 21:39:46 Ready to marshal response ...
	2023/09/14 21:39:46 Ready to write response ...
	2023/09/14 21:40:18 Ready to marshal response ...
	2023/09/14 21:40:18 Ready to write response ...
	2023/09/14 21:40:48 Ready to marshal response ...
	2023/09/14 21:40:48 Ready to write response ...
	2023/09/14 21:42:09 Ready to marshal response ...
	2023/09/14 21:42:09 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:42:20 up 5 min,  0 users,  load average: 0.46, 1.24, 0.66
	Linux addons-452179 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2eae8d413418b6c8bad6725825734bd33089cb4b5d5b3230589d73438cd4f275] <==
	* I0914 21:40:50.955203       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 21:41:06.998927       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 21:41:06.999003       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 21:41:07.012150       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 21:41:07.012222       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 21:41:07.032123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 21:41:07.032225       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 21:41:07.048249       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 21:41:07.049380       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 21:41:07.060789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 21:41:07.060892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 21:41:07.067634       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 21:41:07.067698       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 21:41:07.085170       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 21:41:07.085380       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 21:41:07.110494       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 21:41:07.110682       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0914 21:41:07.119407       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 21:41:07.119492       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 21:41:07.123446       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 21:41:07.123531       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0914 21:41:08.049374       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 21:41:08.110784       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 21:41:08.115055       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0914 21:42:09.798876       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.100.118"}
	
	* 
	* ==> kube-controller-manager [55f8ddd32f689f7f49e207cb2a74a22a2806f8ec19ab7f81b3d74d0ddff10b8d] <==
	* I0914 21:41:31.878052       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0914 21:41:31.878239       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 21:41:32.220199       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0914 21:41:32.220237       1 shared_informer.go:318] Caches are synced for garbage collector
	W0914 21:41:37.290483       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 21:41:37.290583       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0914 21:41:39.425773       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 21:41:39.425887       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0914 21:41:39.925264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 21:41:39.925359       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0914 21:41:49.198918       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 21:41:49.198953       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0914 21:42:09.526006       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0914 21:42:09.568969       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-5mp8s"
	I0914 21:42:09.585641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="60.356865ms"
	I0914 21:42:09.602924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="17.137701ms"
	I0914 21:42:09.603704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.072µs"
	I0914 21:42:09.608851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="107.443µs"
	I0914 21:42:12.169380       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0914 21:42:12.172725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="48.977µs"
	I0914 21:42:12.177881       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0914 21:42:13.570739       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.504155ms"
	I0914 21:42:13.570836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.613µs"
	W0914 21:42:15.263798       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 21:42:15.263858       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [efb04e29d061f63c236201be7e190fd974c4ce5704309b89dbdeb711968c90f3] <==
	* I0914 21:38:16.159133       1 server_others.go:69] "Using iptables proxy"
	I0914 21:38:16.229374       1 node.go:141] Successfully retrieved node IP: 192.168.39.45
	I0914 21:38:16.642438       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 21:38:16.642547       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 21:38:16.662870       1 server_others.go:152] "Using iptables Proxier"
	I0914 21:38:16.663123       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 21:38:16.664570       1 server.go:846] "Version info" version="v1.28.1"
	I0914 21:38:16.664659       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 21:38:16.672639       1 config.go:188] "Starting service config controller"
	I0914 21:38:16.672716       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 21:38:16.672741       1 config.go:97] "Starting endpoint slice config controller"
	I0914 21:38:16.672745       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 21:38:16.690862       1 config.go:315] "Starting node config controller"
	I0914 21:38:16.691505       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 21:38:16.774690       1 shared_informer.go:318] Caches are synced for service config
	I0914 21:38:16.780711       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 21:38:16.791982       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [680b5c9d460585f47a011604a8875e3dba0c7f1821a7f767ee2fe829d36521e6] <==
	* W0914 21:37:46.736376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 21:37:46.736693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 21:37:46.736519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 21:37:46.736831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 21:37:47.554878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:37:47.554903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 21:37:47.564348       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 21:37:47.564392       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 21:37:47.638724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:37:47.638773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:37:47.656214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:37:47.656261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 21:37:47.712967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 21:37:47.713047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 21:37:47.774623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:37:47.774670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 21:37:47.830368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 21:37:47.830414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 21:37:47.846663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:37:47.846710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 21:37:47.864597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:37:47.864640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 21:37:47.940938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 21:37:47.941008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0914 21:37:49.918931       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 21:37:16 UTC, ends at Thu 2023-09-14 21:42:20 UTC. --
	Sep 14 21:42:09 addons-452179 kubelet[1247]: I0914 21:42:09.585149    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3088cde-8d05-45e6-a95c-bba2fa7fdece" containerName="csi-provisioner"
	Sep 14 21:42:09 addons-452179 kubelet[1247]: I0914 21:42:09.585155    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="c3088cde-8d05-45e6-a95c-bba2fa7fdece" containerName="liveness-probe"
	Sep 14 21:42:09 addons-452179 kubelet[1247]: I0914 21:42:09.717116    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/729d1c2d-d3c9-4a7e-b313-4d9f827bb87c-gcp-creds\") pod \"hello-world-app-5d77478584-5mp8s\" (UID: \"729d1c2d-d3c9-4a7e-b313-4d9f827bb87c\") " pod="default/hello-world-app-5d77478584-5mp8s"
	Sep 14 21:42:09 addons-452179 kubelet[1247]: I0914 21:42:09.717173    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtr2g\" (UniqueName: \"kubernetes.io/projected/729d1c2d-d3c9-4a7e-b313-4d9f827bb87c-kube-api-access-xtr2g\") pod \"hello-world-app-5d77478584-5mp8s\" (UID: \"729d1c2d-d3c9-4a7e-b313-4d9f827bb87c\") " pod="default/hello-world-app-5d77478584-5mp8s"
	Sep 14 21:42:10 addons-452179 kubelet[1247]: I0914 21:42:10.923795    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzwd2\" (UniqueName: \"kubernetes.io/projected/1247df4f-da4c-4014-984e-f43e4db830c3-kube-api-access-fzwd2\") pod \"1247df4f-da4c-4014-984e-f43e4db830c3\" (UID: \"1247df4f-da4c-4014-984e-f43e4db830c3\") "
	Sep 14 21:42:10 addons-452179 kubelet[1247]: I0914 21:42:10.927790    1247 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1247df4f-da4c-4014-984e-f43e4db830c3-kube-api-access-fzwd2" (OuterVolumeSpecName: "kube-api-access-fzwd2") pod "1247df4f-da4c-4014-984e-f43e4db830c3" (UID: "1247df4f-da4c-4014-984e-f43e4db830c3"). InnerVolumeSpecName "kube-api-access-fzwd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 21:42:11 addons-452179 kubelet[1247]: I0914 21:42:11.024775    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fzwd2\" (UniqueName: \"kubernetes.io/projected/1247df4f-da4c-4014-984e-f43e4db830c3-kube-api-access-fzwd2\") on node \"addons-452179\" DevicePath \"\""
	Sep 14 21:42:11 addons-452179 kubelet[1247]: I0914 21:42:11.530733    1247 scope.go:117] "RemoveContainer" containerID="60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e"
	Sep 14 21:42:11 addons-452179 kubelet[1247]: I0914 21:42:11.580661    1247 scope.go:117] "RemoveContainer" containerID="60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e"
	Sep 14 21:42:11 addons-452179 kubelet[1247]: E0914 21:42:11.581436    1247 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e\": container with ID starting with 60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e not found: ID does not exist" containerID="60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e"
	Sep 14 21:42:11 addons-452179 kubelet[1247]: I0914 21:42:11.581536    1247 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e"} err="failed to get container status \"60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e\": rpc error: code = NotFound desc = could not find container \"60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e\": container with ID starting with 60e3503a43d10f5df74e98ca7ddd2420f071db8605ac496f68c70662a6eb3f6e not found: ID does not exist"
	Sep 14 21:42:12 addons-452179 kubelet[1247]: I0914 21:42:12.145125    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1247df4f-da4c-4014-984e-f43e4db830c3" path="/var/lib/kubelet/pods/1247df4f-da4c-4014-984e-f43e4db830c3/volumes"
	Sep 14 21:42:14 addons-452179 kubelet[1247]: I0914 21:42:14.145126    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="39d144fb-e6f3-4345-a42e-dc44bb7e131c" path="/var/lib/kubelet/pods/39d144fb-e6f3-4345-a42e-dc44bb7e131c/volumes"
	Sep 14 21:42:14 addons-452179 kubelet[1247]: I0914 21:42:14.145671    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c27a9bb7-8bdd-4103-89d5-338eb661d579" path="/var/lib/kubelet/pods/c27a9bb7-8bdd-4103-89d5-338eb661d579/volumes"
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.555591    1247 scope.go:117] "RemoveContainer" containerID="a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e"
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.557598    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pb7p\" (UniqueName: \"kubernetes.io/projected/3e61032e-e84a-40d9-a998-71542052a973-kube-api-access-8pb7p\") pod \"3e61032e-e84a-40d9-a998-71542052a973\" (UID: \"3e61032e-e84a-40d9-a998-71542052a973\") "
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.557633    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e61032e-e84a-40d9-a998-71542052a973-webhook-cert\") pod \"3e61032e-e84a-40d9-a998-71542052a973\" (UID: \"3e61032e-e84a-40d9-a998-71542052a973\") "
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.565376    1247 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e61032e-e84a-40d9-a998-71542052a973-kube-api-access-8pb7p" (OuterVolumeSpecName: "kube-api-access-8pb7p") pod "3e61032e-e84a-40d9-a998-71542052a973" (UID: "3e61032e-e84a-40d9-a998-71542052a973"). InnerVolumeSpecName "kube-api-access-8pb7p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.567994    1247 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e61032e-e84a-40d9-a998-71542052a973-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3e61032e-e84a-40d9-a998-71542052a973" (UID: "3e61032e-e84a-40d9-a998-71542052a973"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.590191    1247 scope.go:117] "RemoveContainer" containerID="a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e"
	Sep 14 21:42:15 addons-452179 kubelet[1247]: E0914 21:42:15.590723    1247 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e\": container with ID starting with a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e not found: ID does not exist" containerID="a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e"
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.590791    1247 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e"} err="failed to get container status \"a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e\": rpc error: code = NotFound desc = could not find container \"a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e\": container with ID starting with a5b96a0c7413c3daf07e6242f5007638cd79c49aa762bd547f594405d485ac2e not found: ID does not exist"
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.657939    1247 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e61032e-e84a-40d9-a998-71542052a973-webhook-cert\") on node \"addons-452179\" DevicePath \"\""
	Sep 14 21:42:15 addons-452179 kubelet[1247]: I0914 21:42:15.658025    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8pb7p\" (UniqueName: \"kubernetes.io/projected/3e61032e-e84a-40d9-a998-71542052a973-kube-api-access-8pb7p\") on node \"addons-452179\" DevicePath \"\""
	Sep 14 21:42:16 addons-452179 kubelet[1247]: I0914 21:42:16.145707    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3e61032e-e84a-40d9-a998-71542052a973" path="/var/lib/kubelet/pods/3e61032e-e84a-40d9-a998-71542052a973/volumes"
	
	* 
	* ==> storage-provisioner [805f21f5b3fce6f425743640b5026e55f3dd52c971a228d8d593a0c4aedcf82a] <==
	* I0914 21:38:19.009764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 21:38:19.208259       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 21:38:19.213484       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 21:38:19.273443       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 21:38:19.281210       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-452179_8cc77d12-ec6c-4f29-9405-22be97571bc1!
	I0914 21:38:19.273539       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a38987a9-8053-4ba4-8c6e-38817af69779", APIVersion:"v1", ResourceVersion:"813", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-452179_8cc77d12-ec6c-4f29-9405-22be97571bc1 became leader
	I0914 21:38:19.398512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-452179_8cc77d12-ec6c-4f29-9405-22be97571bc1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-452179 -n addons-452179
helpers_test.go:261: (dbg) Run:  kubectl --context addons-452179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (155.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-452179
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-452179: exit status 82 (2m1.496846307s)

                                                
                                                
-- stdout --
	* Stopping node "addons-452179"  ...
	* Stopping node "addons-452179"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-452179" : exit status 82
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-452179
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-452179: exit status 11 (21.653128819s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-452179" : exit status 11
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-452179
addons_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-452179: exit status 11 (6.143916272s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-452179" : exit status 11
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-452179
addons_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-452179: exit status 11 (6.14314029s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-452179" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.44s)
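Each addon command above fails at the same precondition: minikube first opens an SSH session to the node, and the TCP dial to 192.168.39.45:22 returns "no route to host" because the node is no longer reachable after the timed-out stop. A minimal reachability check, offered only as a hedged sketch rather than minikube's own code (the 5-second timeout is an assumption), reproduces the symptom:

	// sshcheck.go - hypothetical standalone probe, not part of minikube.
	// It dials the node's SSH port the same way the addon commands must
	// before they can run `crictl list`, and fails fast when the host is unreachable.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		addr := "192.168.39.45:22" // node address taken from the stderr output above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "ssh port unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("ssh port reachable:", addr)
	}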

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 logs --file /tmp/TestFunctionalserialLogsFileCmd1986179541/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 logs --file /tmp/TestFunctionalserialLogsFileCmd1986179541/001/logs.txt: (1.27176214s)
functional_test.go:1251: expected empty minikube logs output, but got: 
***
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 21:48:26.301840   18876 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17243-6287/.minikube/logs/lastStart.txt: bufio.Scanner: token too long
	E0914 21:48:26.908642   18876 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 c285467e702d7aa6b107095d3289531a0969ba6a4473a3f23ef76c52df6ec5f9" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 c285467e702d7aa6b107095d3289531a0969ba6a4473a3f23ef76c52df6ec5f9": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-09-14T21:48:26Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-5dd5756b68-2j2t2_880fbaea-8ab9-46c1-b89f-4cc2079d3ea7/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-5dd5756b68-2j2t2_880fbaea-8ab9-46c1-b89f-4cc2079d3ea7: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-09-14T21:48:26Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_coredns-5dd5756b68-2j2t2_880fbaea-8ab9-46c1-b89f-4cc2079d3ea7/coredns/0.log\\\": lstat /var/log/pods/kube-system_coredns-5dd5756b68-2j2t2_880fbaea-8ab9-46c1-b89f-4cc2079d3ea7: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: coredns [c285467e702d7aa6b107095d3289531a0969ba6a4473a3f23ef76c52df6ec5f9]

                                                
                                                
** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (1.27s)
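The first stderr line above comes from a bufio.Scanner limit: Scan refuses any single line longer than its buffer (bufio.MaxScanTokenSize, 64 KiB, by default) and reports bufio.ErrTooLong, which surfaces here as "token too long" while reading lastStart.txt. A minimal sketch of the failure mode and the Scanner.Buffer workaround follows (the file path and the 10 MiB cap are placeholders, not minikube's actual values):

	// readlog.go - hypothetical reader, not minikube code.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call the default 64 KiB limit applies, and one over-long
		// line makes sc.Err() return bufio.ErrTooLong ("bufio.Scanner: token too long").
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
			os.Exit(1)
		}
	}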

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (175.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-235631 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-235631 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.761734368s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-235631 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-235631 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f6f0e50f-6590-4f9a-9bf3-1aec235e7747] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f6f0e50f-6590-4f9a-9bf3-1aec235e7747] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.009148392s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-235631 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0914 21:52:13.608203   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:53:32.190479   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:32.195739   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:32.205971   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:32.226245   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:32.266568   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:32.346918   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:32.507357   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:32.827956   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:33.468912   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:34.749502   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:37.310340   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:42.431547   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:53:52.672560   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-235631 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.217701059s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
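The probe that timed out is an HTTP GET against 127.0.0.1 on the node with the Host header forced to nginx.example.com, so that the ingress controller routes it to the nginx Service; curl's exit status 28 means the operation timed out. A Go equivalent of that probe, offered only as an illustrative sketch (the 10-second client timeout is an assumption, not the test's value):

	// ingressprobe.go - hypothetical probe, not part of addons_test.go.
	// Intended to run where the ingress controller listens on port 80,
	// e.g. inside the node via `minikube ssh`.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		req.Host = "nginx.example.com" // overrides the Host header, like curl -H 'Host: ...'

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, "request failed:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}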
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-235631 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-235631 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.250
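The nslookup above queries the ingress-dns responder directly at 192.168.39.250, the address the test obtained from the preceding ip step. A Go equivalent using a custom resolver, again only an illustrative sketch (the UDP transport and 5-second timeout are assumptions):

	// dnsprobe.go - hypothetical probe, not part of addons_test.go.
	package main

	import (
		"context"
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		resolver := &net.Resolver{
			PreferGo: true,
			// Send every query to the node's DNS responder instead of /etc/resolv.conf.
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, "udp", "192.168.39.250:53")
			},
		}

		addrs, err := resolver.LookupHost(context.Background(), "hello-john.test")
		if err != nil {
			fmt.Fprintln(os.Stderr, "lookup failed:", err)
			os.Exit(1)
		}
		fmt.Println(addrs)
	}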
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-235631 addons disable ingress-dns --alsologtostderr -v=1
E0914 21:54:13.153508   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-235631 addons disable ingress-dns --alsologtostderr -v=1: (13.417725174s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-235631 addons disable ingress --alsologtostderr -v=1
E0914 21:54:29.764857   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-235631 addons disable ingress --alsologtostderr -v=1: (7.500788637s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-235631 -n ingress-addon-legacy-235631
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-235631 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-337253                                                   | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3207744311/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-337253 ssh findmnt                                          | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-337253                                                   | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3207744311/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| update-context | functional-337253                                                      | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-337253                                                      | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-337253                                                      | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-337253                                                      | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-337253 ssh findmnt                                          | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-337253 ssh findmnt                                          | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| image          | functional-337253                                                      | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-337253 ssh findmnt                                          | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| ssh            | functional-337253 ssh pgrep                                            | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| mount          | -p functional-337253                                                   | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| image          | functional-337253 image build -t                                       | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | localhost/my-image:functional-337253                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-337253                                                      | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-337253                                                      | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-337253 image ls                                             | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	| delete         | -p functional-337253                                                   | functional-337253           | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:49 UTC |
	| start          | -p ingress-addon-legacy-235631                                         | ingress-addon-legacy-235631 | jenkins | v1.31.2 | 14 Sep 23 21:49 UTC | 14 Sep 23 21:51 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-235631                                            | ingress-addon-legacy-235631 | jenkins | v1.31.2 | 14 Sep 23 21:51 UTC | 14 Sep 23 21:51 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-235631                                            | ingress-addon-legacy-235631 | jenkins | v1.31.2 | 14 Sep 23 21:51 UTC | 14 Sep 23 21:51 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-235631                                            | ingress-addon-legacy-235631 | jenkins | v1.31.2 | 14 Sep 23 21:51 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-235631 ip                                         | ingress-addon-legacy-235631 | jenkins | v1.31.2 | 14 Sep 23 21:54 UTC | 14 Sep 23 21:54 UTC |
	| addons         | ingress-addon-legacy-235631                                            | ingress-addon-legacy-235631 | jenkins | v1.31.2 | 14 Sep 23 21:54 UTC | 14 Sep 23 21:54 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-235631                                            | ingress-addon-legacy-235631 | jenkins | v1.31.2 | 14 Sep 23 21:54 UTC | 14 Sep 23 21:54 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 21:49:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 21:49:28.565774   21723 out.go:296] Setting OutFile to fd 1 ...
	I0914 21:49:28.566140   21723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:49:28.566160   21723 out.go:309] Setting ErrFile to fd 2...
	I0914 21:49:28.566168   21723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:49:28.566630   21723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 21:49:28.567562   21723 out.go:303] Setting JSON to false
	I0914 21:49:28.568345   21723 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1911,"bootTime":1694726258,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 21:49:28.568404   21723 start.go:138] virtualization: kvm guest
	I0914 21:49:28.570372   21723 out.go:177] * [ingress-addon-legacy-235631] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 21:49:28.572332   21723 notify.go:220] Checking for updates...
	I0914 21:49:28.572339   21723 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 21:49:28.573878   21723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 21:49:28.575214   21723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:49:28.576447   21723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:49:28.577725   21723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 21:49:28.578931   21723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 21:49:28.580451   21723 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 21:49:28.614392   21723 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 21:49:28.615663   21723 start.go:298] selected driver: kvm2
	I0914 21:49:28.615678   21723 start.go:902] validating driver "kvm2" against <nil>
	I0914 21:49:28.615690   21723 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 21:49:28.616334   21723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:49:28.616399   21723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 21:49:28.629816   21723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 21:49:28.629874   21723 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 21:49:28.630091   21723 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 21:49:28.630135   21723 cni.go:84] Creating CNI manager for ""
	I0914 21:49:28.630155   21723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 21:49:28.630168   21723 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 21:49:28.630179   21723 start_flags.go:321] config:
	{Name:ingress-addon-legacy-235631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-235631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:49:28.630342   21723 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:49:28.632225   21723 out.go:177] * Starting control plane node ingress-addon-legacy-235631 in cluster ingress-addon-legacy-235631
	I0914 21:49:28.633590   21723 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 21:49:29.068564   21723 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0914 21:49:29.068606   21723 cache.go:57] Caching tarball of preloaded images
	I0914 21:49:29.068803   21723 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 21:49:29.070760   21723 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0914 21:49:29.072186   21723 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0914 21:49:29.174616   21723 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0914 21:49:45.052651   21723 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0914 21:49:45.052742   21723 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0914 21:49:46.029549   21723 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
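
	Note: the lines above show the preload tarball being downloaded with an md5 digest embedded in the URL (checksum=md5:0d02e096853189c5b37812b400898e14) and then verified on disk. The sketch below illustrates that style of verification in Go; it is an illustration only, not minikube's actual preload code, and the path and digest are simply copied from the log.

	    package main

	    import (
	        "crypto/md5"
	        "encoding/hex"
	        "fmt"
	        "io"
	        "log"
	        "os"
	    )

	    // verifyMD5 hashes the file at path and compares it to the expected hex digest.
	    func verifyMD5(path, expected string) error {
	        f, err := os.Open(path)
	        if err != nil {
	            return err
	        }
	        defer f.Close()
	        h := md5.New()
	        if _, err := io.Copy(h, f); err != nil {
	            return err
	        }
	        got := hex.EncodeToString(h.Sum(nil))
	        if got != expected {
	            return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	        }
	        return nil
	    }

	    func main() {
	        // Path and digest mirror the values logged above; adjust for other runs.
	        tarball := "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
	        if err := verifyMD5(tarball, "0d02e096853189c5b37812b400898e14"); err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println("preload checksum OK")
	    }
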
	I0914 21:49:46.029941   21723 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/config.json ...
	I0914 21:49:46.029978   21723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/config.json: {Name:mkea1758cb67eb31e379aad125a6a32392a15e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:49:46.030158   21723 start.go:365] acquiring machines lock for ingress-addon-legacy-235631: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 21:49:46.030196   21723 start.go:369] acquired machines lock for "ingress-addon-legacy-235631" in 19.844µs
	I0914 21:49:46.030217   21723 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-235631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-235631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 21:49:46.030306   21723 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 21:49:46.032757   21723 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0914 21:49:46.032939   21723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:49:46.032976   21723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:49:46.046875   21723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0914 21:49:46.047286   21723 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:49:46.047819   21723 main.go:141] libmachine: Using API Version  1
	I0914 21:49:46.047840   21723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:49:46.048175   21723 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:49:46.048365   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetMachineName
	I0914 21:49:46.048522   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:49:46.048673   21723 start.go:159] libmachine.API.Create for "ingress-addon-legacy-235631" (driver="kvm2")
	I0914 21:49:46.048709   21723 client.go:168] LocalClient.Create starting
	I0914 21:49:46.048744   21723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem
	I0914 21:49:46.048778   21723 main.go:141] libmachine: Decoding PEM data...
	I0914 21:49:46.048794   21723 main.go:141] libmachine: Parsing certificate...
	I0914 21:49:46.048847   21723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem
	I0914 21:49:46.048866   21723 main.go:141] libmachine: Decoding PEM data...
	I0914 21:49:46.048877   21723 main.go:141] libmachine: Parsing certificate...
	I0914 21:49:46.048894   21723 main.go:141] libmachine: Running pre-create checks...
	I0914 21:49:46.048904   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .PreCreateCheck
	I0914 21:49:46.049260   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetConfigRaw
	I0914 21:49:46.049621   21723 main.go:141] libmachine: Creating machine...
	I0914 21:49:46.049636   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .Create
	I0914 21:49:46.049755   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Creating KVM machine...
	I0914 21:49:46.051098   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found existing default KVM network
	I0914 21:49:46.051790   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:46.051658   21778 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a00}
	I0914 21:49:46.057165   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | trying to create private KVM network mk-ingress-addon-legacy-235631 192.168.39.0/24...
	I0914 21:49:46.121833   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | private KVM network mk-ingress-addon-legacy-235631 192.168.39.0/24 created
	I0914 21:49:46.121901   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:46.121785   21778 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:49:46.121923   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Setting up store path in /home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631 ...
	I0914 21:49:46.121950   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Building disk image from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso
	I0914 21:49:46.121974   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Downloading /home/jenkins/minikube-integration/17243-6287/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso...
	I0914 21:49:46.323457   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:46.323337   21778 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa...
	I0914 21:49:46.506746   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:46.506644   21778 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/ingress-addon-legacy-235631.rawdisk...
	I0914 21:49:46.506774   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Writing magic tar header
	I0914 21:49:46.506788   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Writing SSH key tar header
	I0914 21:49:46.506797   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:46.506770   21778 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631 ...
	I0914 21:49:46.506944   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631
	I0914 21:49:46.506983   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines
	I0914 21:49:46.507003   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631 (perms=drwx------)
	I0914 21:49:46.507024   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines (perms=drwxr-xr-x)
	I0914 21:49:46.507039   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube (perms=drwxr-xr-x)
	I0914 21:49:46.507059   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:49:46.507084   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287 (perms=drwxrwxr-x)
	I0914 21:49:46.507098   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287
	I0914 21:49:46.507108   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 21:49:46.507116   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Checking permissions on dir: /home/jenkins
	I0914 21:49:46.507124   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Checking permissions on dir: /home
	I0914 21:49:46.507132   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Skipping /home - not owner
	I0914 21:49:46.507142   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 21:49:46.507151   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 21:49:46.507177   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Creating domain...
	I0914 21:49:46.508153   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) define libvirt domain using xml: 
	I0914 21:49:46.508174   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) <domain type='kvm'>
	I0914 21:49:46.508189   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   <name>ingress-addon-legacy-235631</name>
	I0914 21:49:46.508208   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   <memory unit='MiB'>4096</memory>
	I0914 21:49:46.508230   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   <vcpu>2</vcpu>
	I0914 21:49:46.508244   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   <features>
	I0914 21:49:46.508258   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <acpi/>
	I0914 21:49:46.508271   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <apic/>
	I0914 21:49:46.508295   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <pae/>
	I0914 21:49:46.508315   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     
	I0914 21:49:46.508326   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   </features>
	I0914 21:49:46.508337   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   <cpu mode='host-passthrough'>
	I0914 21:49:46.508345   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   
	I0914 21:49:46.508351   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   </cpu>
	I0914 21:49:46.508360   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   <os>
	I0914 21:49:46.508370   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <type>hvm</type>
	I0914 21:49:46.508379   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <boot dev='cdrom'/>
	I0914 21:49:46.508387   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <boot dev='hd'/>
	I0914 21:49:46.508396   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <bootmenu enable='no'/>
	I0914 21:49:46.508401   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   </os>
	I0914 21:49:46.508409   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   <devices>
	I0914 21:49:46.508419   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <disk type='file' device='cdrom'>
	I0914 21:49:46.508429   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/boot2docker.iso'/>
	I0914 21:49:46.508441   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <target dev='hdc' bus='scsi'/>
	I0914 21:49:46.508449   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <readonly/>
	I0914 21:49:46.508459   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     </disk>
	I0914 21:49:46.508466   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <disk type='file' device='disk'>
	I0914 21:49:46.508477   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 21:49:46.508494   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/ingress-addon-legacy-235631.rawdisk'/>
	I0914 21:49:46.508502   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <target dev='hda' bus='virtio'/>
	I0914 21:49:46.508508   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     </disk>
	I0914 21:49:46.508516   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <interface type='network'>
	I0914 21:49:46.508528   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <source network='mk-ingress-addon-legacy-235631'/>
	I0914 21:49:46.508538   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <model type='virtio'/>
	I0914 21:49:46.508544   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     </interface>
	I0914 21:49:46.508552   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <interface type='network'>
	I0914 21:49:46.508559   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <source network='default'/>
	I0914 21:49:46.508567   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <model type='virtio'/>
	I0914 21:49:46.508573   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     </interface>
	I0914 21:49:46.508581   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <serial type='pty'>
	I0914 21:49:46.508594   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <target port='0'/>
	I0914 21:49:46.508602   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     </serial>
	I0914 21:49:46.508609   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <console type='pty'>
	I0914 21:49:46.508617   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <target type='serial' port='0'/>
	I0914 21:49:46.508623   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     </console>
	I0914 21:49:46.508634   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     <rng model='virtio'>
	I0914 21:49:46.508657   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)       <backend model='random'>/dev/random</backend>
	I0914 21:49:46.508672   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     </rng>
	I0914 21:49:46.508684   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     
	I0914 21:49:46.508697   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)     
	I0914 21:49:46.508713   21723 main.go:141] libmachine: (ingress-addon-legacy-235631)   </devices>
	I0914 21:49:46.508725   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) </domain>
	I0914 21:49:46.508742   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) 
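
	Note: the scattered <domain> ... </domain> lines above are the libvirt definition the kvm2 driver builds before creating the VM. The sketch below renders a comparable definition with Go's text/template; the struct, field names, and template text are reconstructions assembled from the log for illustration, not the driver's real code, and the paths in main are placeholders.

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // domainConfig holds the handful of values that vary between machines.
	    // The struct and template are illustrative, not the kvm2 driver's own.
	    type domainConfig struct {
	        Name, ISOPath, DiskPath, Network string
	        MemoryMiB, CPUs                  int
	    }

	    const domainXML = `<domain type='kvm'>
	      <name>{{.Name}}</name>
	      <memory unit='MiB'>{{.MemoryMiB}}</memory>
	      <vcpu>{{.CPUs}}</vcpu>
	      <features><acpi/><apic/><pae/></features>
	      <cpu mode='host-passthrough'></cpu>
	      <os>
	        <type>hvm</type>
	        <boot dev='cdrom'/>
	        <boot dev='hd'/>
	        <bootmenu enable='no'/>
	      </os>
	      <devices>
	        <disk type='file' device='cdrom'>
	          <source file='{{.ISOPath}}'/>
	          <target dev='hdc' bus='scsi'/>
	          <readonly/>
	        </disk>
	        <disk type='file' device='disk'>
	          <driver name='qemu' type='raw' cache='default' io='threads'/>
	          <source file='{{.DiskPath}}'/>
	          <target dev='hda' bus='virtio'/>
	        </disk>
	        <interface type='network'>
	          <source network='{{.Network}}'/>
	          <model type='virtio'/>
	        </interface>
	        <interface type='network'>
	          <source network='default'/>
	          <model type='virtio'/>
	        </interface>
	        <serial type='pty'><target port='0'/></serial>
	        <console type='pty'><target type='serial' port='0'/></console>
	        <rng model='virtio'><backend model='random'>/dev/random</backend></rng>
	      </devices>
	    </domain>
	    `

	    func main() {
	        cfg := domainConfig{
	            Name:      "ingress-addon-legacy-235631",
	            MemoryMiB: 4096,
	            CPUs:      2,
	            ISOPath:   "/path/to/boot2docker.iso",                     // placeholder
	            DiskPath:  "/path/to/ingress-addon-legacy-235631.rawdisk", // placeholder
	            Network:   "mk-ingress-addon-legacy-235631",
	        }
	        tmpl := template.Must(template.New("domain").Parse(domainXML))
	        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
	            panic(err)
	        }
	    }
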
	I0914 21:49:46.512668   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:0f:69:e7 in network default
	I0914 21:49:46.513256   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Ensuring networks are active...
	I0914 21:49:46.513284   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:46.513986   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Ensuring network default is active
	I0914 21:49:46.514332   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Ensuring network mk-ingress-addon-legacy-235631 is active
	I0914 21:49:46.514853   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Getting domain xml...
	I0914 21:49:46.515572   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Creating domain...
	I0914 21:49:47.696524   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Waiting to get IP...
	I0914 21:49:47.697248   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:47.697602   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:47.697643   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:47.697595   21778 retry.go:31] will retry after 257.352859ms: waiting for machine to come up
	I0914 21:49:47.956140   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:47.956542   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:47.956571   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:47.956507   21778 retry.go:31] will retry after 319.812649ms: waiting for machine to come up
	I0914 21:49:48.278107   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:48.278530   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:48.278576   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:48.278478   21778 retry.go:31] will retry after 300.444802ms: waiting for machine to come up
	I0914 21:49:48.580875   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:48.581334   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:48.581363   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:48.581296   21778 retry.go:31] will retry after 421.026872ms: waiting for machine to come up
	I0914 21:49:49.003719   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:49.004141   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:49.004167   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:49.004094   21778 retry.go:31] will retry after 536.24248ms: waiting for machine to come up
	I0914 21:49:49.541691   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:49.542000   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:49.542022   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:49.541950   21778 retry.go:31] will retry after 922.238445ms: waiting for machine to come up
	I0914 21:49:50.465918   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:50.466252   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:50.466285   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:50.466203   21778 retry.go:31] will retry after 831.176512ms: waiting for machine to come up
	I0914 21:49:51.298318   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:51.298669   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:51.298699   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:51.298618   21778 retry.go:31] will retry after 1.074733227s: waiting for machine to come up
	I0914 21:49:52.374982   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:52.375361   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:52.375392   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:52.375330   21778 retry.go:31] will retry after 1.377380752s: waiting for machine to come up
	I0914 21:49:53.753864   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:53.754242   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:53.754271   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:53.754192   21778 retry.go:31] will retry after 2.123660768s: waiting for machine to come up
	I0914 21:49:55.879128   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:55.879615   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:55.879652   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:55.879551   21778 retry.go:31] will retry after 2.788883229s: waiting for machine to come up
	I0914 21:49:58.671973   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:49:58.672308   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:49:58.672358   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:49:58.672286   21778 retry.go:31] will retry after 2.350742606s: waiting for machine to come up
	I0914 21:50:01.024425   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:01.024900   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:50:01.024925   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:50:01.024813   21778 retry.go:31] will retry after 3.2473139s: waiting for machine to come up
	I0914 21:50:04.275436   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:04.275848   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find current IP address of domain ingress-addon-legacy-235631 in network mk-ingress-addon-legacy-235631
	I0914 21:50:04.275871   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | I0914 21:50:04.275802   21778 retry.go:31] will retry after 4.703511423s: waiting for machine to come up
	I0914 21:50:08.982734   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:08.983190   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Found IP for machine: 192.168.39.250
	I0914 21:50:08.983221   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has current primary IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
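
	Note: the repeated "will retry after ...: waiting for machine to come up" lines above come from a bounded backoff loop that polls until the new domain reports an IP address. The sketch below shows that polling pattern in a minimal form; lookupIP is a hypothetical stub standing in for the real DHCP-lease query, and the delay schedule is illustrative rather than minikube's exact timing.

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    var errNoIP = errors.New("no IP yet")

	    // lookupIP is a stand-in for querying the libvirt DHCP leases for the domain.
	    func lookupIP() (string, error) {
	        return "", errNoIP // replace with a real lease lookup
	    }

	    // waitForIP retries lookupIP with a jittered, growing delay until a deadline.
	    func waitForIP(timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        delay := 250 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := lookupIP(); err == nil {
	                return ip, nil
	            }
	            wait := delay + time.Duration(rand.Int63n(int64(delay)))
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
	            time.Sleep(wait)
	            if delay < 5*time.Second {
	                delay *= 2
	            }
	        }
	        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	    }

	    func main() {
	        if ip, err := waitForIP(3 * time.Second); err != nil {
	            fmt.Println("error:", err)
	        } else {
	            fmt.Println("found IP:", ip)
	        }
	    }
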
	I0914 21:50:08.983233   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Reserving static IP address...
	I0914 21:50:08.983560   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-235631", mac: "52:54:00:69:1a:a2", ip: "192.168.39.250"} in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.052166   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Getting to WaitForSSH function...
	I0914 21:50:09.052204   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Reserved static IP address: 192.168.39.250
	I0914 21:50:09.052220   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Waiting for SSH to be available...
	I0914 21:50:09.054728   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.055057   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.055084   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.055215   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Using SSH client type: external
	I0914 21:50:09.055241   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa (-rw-------)
	I0914 21:50:09.055280   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 21:50:09.055304   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | About to run SSH command:
	I0914 21:50:09.055320   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | exit 0
	I0914 21:50:09.146991   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | SSH cmd err, output: <nil>: 
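
	Note: the WaitForSSH step above simply runs `exit 0` on the guest with the external ssh client until it succeeds. The sketch below issues the same probe with os/exec; the flags are a subset of those in the logged command, and the helper itself is illustrative, not minikube's own code.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // sshProbe runs `exit 0` on the target host via the external ssh client,
	    // using roughly the options seen in the log above.
	    func sshProbe(host, keyPath string) error {
	        args := []string{
	            "-F", "/dev/null",
	            "-o", "ConnectionAttempts=3",
	            "-o", "ConnectTimeout=10",
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-o", "IdentitiesOnly=yes",
	            "-i", keyPath,
	            "-p", "22",
	            "docker@" + host,
	            "exit 0",
	        }
	        out, err := exec.Command("ssh", args...).CombinedOutput()
	        if err != nil {
	            return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
	        }
	        return nil
	    }

	    func main() {
	        if err := sshProbe("192.168.39.250", "/path/to/id_rsa"); err != nil {
	            fmt.Println(err)
	            return
	        }
	        fmt.Println("SSH is available")
	    }
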
	I0914 21:50:09.147201   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) KVM machine creation complete!
	I0914 21:50:09.147564   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetConfigRaw
	I0914 21:50:09.148149   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:09.148342   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:09.148494   21723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 21:50:09.148512   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetState
	I0914 21:50:09.149548   21723 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 21:50:09.149560   21723 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 21:50:09.149567   21723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 21:50:09.149573   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:09.151891   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.152212   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.152234   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.152339   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:09.152527   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.152683   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.152793   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:09.152969   21723 main.go:141] libmachine: Using SSH client type: native
	I0914 21:50:09.153287   21723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0914 21:50:09.153299   21723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 21:50:09.270373   21723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 21:50:09.270396   21723 main.go:141] libmachine: Detecting the provisioner...
	I0914 21:50:09.270408   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:09.273222   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.273639   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.273680   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.273822   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:09.274029   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.274200   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.274345   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:09.274508   21723 main.go:141] libmachine: Using SSH client type: native
	I0914 21:50:09.274870   21723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0914 21:50:09.274885   21723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 21:50:09.395572   21723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g52d8811-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0914 21:50:09.395637   21723 main.go:141] libmachine: found compatible host: buildroot
	I0914 21:50:09.395648   21723 main.go:141] libmachine: Provisioning with buildroot...
	I0914 21:50:09.395663   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetMachineName
	I0914 21:50:09.395916   21723 buildroot.go:166] provisioning hostname "ingress-addon-legacy-235631"
	I0914 21:50:09.395943   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetMachineName
	I0914 21:50:09.396124   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:09.398646   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.398963   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.398986   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.399134   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:09.399303   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.399482   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.399618   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:09.399784   21723 main.go:141] libmachine: Using SSH client type: native
	I0914 21:50:09.400074   21723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0914 21:50:09.400089   21723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-235631 && echo "ingress-addon-legacy-235631" | sudo tee /etc/hostname
	I0914 21:50:09.531978   21723 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-235631
	
	I0914 21:50:09.532005   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:09.534603   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.534911   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.534948   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.535077   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:09.535266   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.535428   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.535564   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:09.535753   21723 main.go:141] libmachine: Using SSH client type: native
	I0914 21:50:09.536103   21723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0914 21:50:09.536128   21723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-235631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-235631/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-235631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 21:50:09.658300   21723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 21:50:09.658330   21723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 21:50:09.658356   21723 buildroot.go:174] setting up certificates
	I0914 21:50:09.658368   21723 provision.go:83] configureAuth start
	I0914 21:50:09.658381   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetMachineName
	I0914 21:50:09.658661   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetIP
	I0914 21:50:09.661317   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.661674   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.661704   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.661842   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:09.663926   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.664255   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.664286   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.664397   21723 provision.go:138] copyHostCerts
	I0914 21:50:09.664429   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 21:50:09.664460   21723 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 21:50:09.664469   21723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 21:50:09.664527   21723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 21:50:09.664612   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 21:50:09.664630   21723 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 21:50:09.664636   21723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 21:50:09.664660   21723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 21:50:09.664701   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 21:50:09.664716   21723 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 21:50:09.664722   21723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 21:50:09.664746   21723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
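
	Note: copyHostCerts above removes any stale ca.pem/cert.pem/key.pem under the .minikube directory and writes fresh copies. The sketch below shows that remove-then-copy step in a self-contained form with placeholder paths; it is not the actual exec_runner code.

	    package main

	    import (
	        "fmt"
	        "log"
	        "os"
	        "path/filepath"
	    )

	    // copyCert replaces dst with the contents of src (mode 0644),
	    // mirroring the remove-then-copy pattern in the log above.
	    func copyCert(src, dst string) error {
	        data, err := os.ReadFile(src)
	        if err != nil {
	            return err
	        }
	        if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
	            return err
	        }
	        if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
	            return err
	        }
	        return os.WriteFile(dst, data, 0644)
	    }

	    func main() {
	        // Placeholder paths; the real ones live under .minikube/certs and .minikube.
	        for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
	            src := filepath.Join("certs", name)
	            dst := filepath.Join("minikube-home", name)
	            if err := copyCert(src, dst); err != nil {
	                log.Fatalf("copy %s: %v", name, err)
	            }
	            fmt.Println("copied", name)
	        }
	    }
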
	I0914 21:50:09.664789   21723 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-235631 san=[192.168.39.250 192.168.39.250 localhost 127.0.0.1 minikube ingress-addon-legacy-235631]
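
	Note: the server certificate above is generated with the SANs shown (192.168.39.250, localhost, 127.0.0.1, minikube, ingress-addon-legacy-235631) and a 26280h lifetime. The sketch below produces a certificate with the same SANs using crypto/x509; for brevity it is self-signed, whereas minikube signs the server cert with its own CA, so treat this as an approximation only.

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-235631"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-235631"},
	            IPAddresses:  []net.IP{net.ParseIP("192.168.39.250"), net.ParseIP("127.0.0.1")},
	        }
	        // Self-signed for brevity; minikube signs the server cert with its CA instead.
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            log.Fatal(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
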
	I0914 21:50:09.808680   21723 provision.go:172] copyRemoteCerts
	I0914 21:50:09.808745   21723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 21:50:09.808776   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:09.811434   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.811794   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.811828   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.811963   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:09.812153   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.812295   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:09.812496   21723 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa Username:docker}
	I0914 21:50:09.900421   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 21:50:09.900492   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 21:50:09.920911   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 21:50:09.920975   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0914 21:50:09.940915   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 21:50:09.940983   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 21:50:09.960765   21723 provision.go:86] duration metric: configureAuth took 302.383018ms
	I0914 21:50:09.960794   21723 buildroot.go:189] setting minikube options for container-runtime
	I0914 21:50:09.960968   21723 config.go:182] Loaded profile config "ingress-addon-legacy-235631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 21:50:09.961037   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:09.963769   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.964139   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:09.964164   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:09.964304   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:09.964530   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.964700   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:09.964912   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:09.965054   21723 main.go:141] libmachine: Using SSH client type: native
	I0914 21:50:09.965414   21723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0914 21:50:09.965431   21723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 21:50:10.265437   21723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 21:50:10.265460   21723 main.go:141] libmachine: Checking connection to Docker...
	I0914 21:50:10.265469   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetURL
	I0914 21:50:10.266638   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Using libvirt version 6000000
	I0914 21:50:10.268956   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.269223   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:10.269256   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.269430   21723 main.go:141] libmachine: Docker is up and running!
	I0914 21:50:10.269451   21723 main.go:141] libmachine: Reticulating splines...
	I0914 21:50:10.269457   21723 client.go:171] LocalClient.Create took 24.220738272s
	I0914 21:50:10.269474   21723 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-235631" took 24.220802997s
	I0914 21:50:10.269487   21723 start.go:300] post-start starting for "ingress-addon-legacy-235631" (driver="kvm2")
	I0914 21:50:10.269496   21723 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 21:50:10.269511   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:10.269750   21723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 21:50:10.269782   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:10.271945   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.272228   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:10.272256   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.272353   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:10.272534   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:10.272703   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:10.272876   21723 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa Username:docker}
	I0914 21:50:10.360533   21723 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 21:50:10.364221   21723 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 21:50:10.364237   21723 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 21:50:10.364289   21723 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 21:50:10.364353   21723 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 21:50:10.364362   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /etc/ssl/certs/134852.pem
	I0914 21:50:10.364440   21723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 21:50:10.372413   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 21:50:10.391667   21723 start.go:303] post-start completed in 122.168008ms
	I0914 21:50:10.391704   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetConfigRaw
	I0914 21:50:10.392203   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetIP
	I0914 21:50:10.394544   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.394854   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:10.394905   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.395084   21723 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/config.json ...
	I0914 21:50:10.395259   21723 start.go:128] duration metric: createHost completed in 24.364941152s
	I0914 21:50:10.395280   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:10.397485   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.397818   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:10.397841   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.398004   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:10.398197   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:10.398351   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:10.398480   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:10.398622   21723 main.go:141] libmachine: Using SSH client type: native
	I0914 21:50:10.398995   21723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0914 21:50:10.399012   21723 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 21:50:10.515613   21723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694728210.491229023
	
	I0914 21:50:10.515641   21723 fix.go:206] guest clock: 1694728210.491229023
	I0914 21:50:10.515648   21723 fix.go:219] Guest: 2023-09-14 21:50:10.491229023 +0000 UTC Remote: 2023-09-14 21:50:10.395269666 +0000 UTC m=+41.858230020 (delta=95.959357ms)
	I0914 21:50:10.515665   21723 fix.go:190] guest clock delta is within tolerance: 95.959357ms
	I0914 21:50:10.515669   21723 start.go:83] releasing machines lock for "ingress-addon-legacy-235631", held for 24.485465689s
	I0914 21:50:10.515688   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:10.515954   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetIP
	I0914 21:50:10.518240   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.518578   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:10.518602   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.518702   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:10.519172   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:10.519328   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:10.519380   21723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 21:50:10.519422   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:10.519560   21723 ssh_runner.go:195] Run: cat /version.json
	I0914 21:50:10.519580   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:10.522018   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.522253   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.522329   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:10.522365   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.522487   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:10.522630   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:10.522703   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:10.522734   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:10.522800   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:10.522899   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:10.522964   21723 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa Username:docker}
	I0914 21:50:10.523025   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:10.523143   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:10.523277   21723 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa Username:docker}
	I0914 21:50:10.640569   21723 ssh_runner.go:195] Run: systemctl --version
	I0914 21:50:10.646130   21723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 21:50:10.802708   21723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 21:50:10.808760   21723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 21:50:10.808822   21723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 21:50:10.822022   21723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 21:50:10.822056   21723 start.go:469] detecting cgroup driver to use...
	I0914 21:50:10.822125   21723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 21:50:10.836645   21723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 21:50:10.847607   21723 docker.go:196] disabling cri-docker service (if available) ...
	I0914 21:50:10.847652   21723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 21:50:10.858131   21723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 21:50:10.869070   21723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 21:50:10.961518   21723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 21:50:11.071458   21723 docker.go:212] disabling docker service ...
	I0914 21:50:11.071552   21723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 21:50:11.083958   21723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 21:50:11.094470   21723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 21:50:11.189150   21723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 21:50:11.281677   21723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 21:50:11.292862   21723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 21:50:11.307919   21723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 21:50:11.307995   21723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:50:11.316573   21723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 21:50:11.316620   21723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:50:11.325251   21723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:50:11.333761   21723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
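The four sed edits above pin the pause image to registry.k8s.io/pause:3.2 and switch cri-o to the cgroupfs cgroup manager with conmon placed in the per-pod cgroup. A hypothetical spot check of the resulting drop-in (not part of this run) would be:

	minikube ssh -p ingress-addon-legacy-235631 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"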
	I0914 21:50:11.342258   21723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 21:50:11.350906   21723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 21:50:11.358445   21723 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 21:50:11.358498   21723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 21:50:11.369456   21723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 21:50:11.377608   21723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 21:50:11.471057   21723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 21:50:11.621697   21723 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 21:50:11.621766   21723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 21:50:11.626041   21723 start.go:537] Will wait 60s for crictl version
	I0914 21:50:11.626088   21723 ssh_runner.go:195] Run: which crictl
	I0914 21:50:11.629357   21723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 21:50:11.662506   21723 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 21:50:11.662560   21723 ssh_runner.go:195] Run: crio --version
	I0914 21:50:11.702190   21723 ssh_runner.go:195] Run: crio --version
	I0914 21:50:11.742711   21723 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0914 21:50:11.744194   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetIP
	I0914 21:50:11.747100   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:11.747396   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:11.747431   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:11.747566   21723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 21:50:11.751067   21723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 21:50:11.762300   21723 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 21:50:11.762353   21723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 21:50:11.785763   21723 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0914 21:50:11.785817   21723 ssh_runner.go:195] Run: which lz4
	I0914 21:50:11.788962   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0914 21:50:11.789049   21723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 21:50:11.792562   21723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 21:50:11.792584   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0914 21:50:13.825131   21723 crio.go:444] Took 2.036100 seconds to copy over tarball
	I0914 21:50:13.825201   21723 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 21:50:16.679831   21723 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.854603372s)
	I0914 21:50:16.679855   21723 crio.go:451] Took 2.854701 seconds to extract the tarball
	I0914 21:50:16.679866   21723 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 21:50:16.722431   21723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 21:50:16.767943   21723 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0914 21:50:16.767966   21723 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 21:50:16.768047   21723 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 21:50:16.768069   21723 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0914 21:50:16.768085   21723 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0914 21:50:16.768112   21723 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 21:50:16.768127   21723 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 21:50:16.768078   21723 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0914 21:50:16.768095   21723 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 21:50:16.768047   21723 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 21:50:16.769281   21723 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 21:50:16.769493   21723 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 21:50:16.769507   21723 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0914 21:50:16.769508   21723 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 21:50:16.769534   21723 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 21:50:16.769555   21723 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0914 21:50:16.769511   21723 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 21:50:16.769509   21723 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 21:50:16.945947   21723 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 21:50:16.980728   21723 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 21:50:16.980764   21723 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 21:50:16.980811   21723 ssh_runner.go:195] Run: which crictl
	I0914 21:50:16.984418   21723 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 21:50:16.990116   21723 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0914 21:50:16.999377   21723 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0914 21:50:17.006143   21723 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0914 21:50:17.015249   21723 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 21:50:17.019199   21723 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 21:50:17.024100   21723 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0914 21:50:17.046411   21723 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0914 21:50:17.098670   21723 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0914 21:50:17.098705   21723 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 21:50:17.098743   21723 ssh_runner.go:195] Run: which crictl
	I0914 21:50:17.103744   21723 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0914 21:50:17.103769   21723 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 21:50:17.103812   21723 ssh_runner.go:195] Run: which crictl
	I0914 21:50:17.139209   21723 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0914 21:50:17.139241   21723 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0914 21:50:17.139275   21723 ssh_runner.go:195] Run: which crictl
	I0914 21:50:17.146549   21723 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0914 21:50:17.146581   21723 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 21:50:17.146613   21723 ssh_runner.go:195] Run: which crictl
	I0914 21:50:17.147927   21723 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0914 21:50:17.147951   21723 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 21:50:17.147976   21723 ssh_runner.go:195] Run: which crictl
	I0914 21:50:17.158134   21723 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0914 21:50:17.158160   21723 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0914 21:50:17.158171   21723 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0914 21:50:17.158216   21723 ssh_runner.go:195] Run: which crictl
	I0914 21:50:17.158232   21723 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0914 21:50:17.158274   21723 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0914 21:50:17.158309   21723 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 21:50:17.161435   21723 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0914 21:50:17.237062   21723 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0914 21:50:17.237128   21723 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0914 21:50:17.237211   21723 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0914 21:50:17.237272   21723 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0914 21:50:17.237303   21723 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0914 21:50:17.243151   21723 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0914 21:50:17.266704   21723 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0914 21:50:17.607572   21723 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 21:50:17.743563   21723 cache_images.go:92] LoadImages completed in 975.584075ms
	W0914 21:50:17.743632   21723 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
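The warning above is non-fatal: the host-side cache under .minikube/cache/images was never populated for these legacy images, so they are simply pulled inside the VM during kubeadm preflight (see the "Pulling images required" step further down). If pre-seeding were wanted, a hypothetical way to do it ahead of the run would be:

	minikube cache add registry.k8s.io/pause:3.2
	minikube cache add registry.k8s.io/kube-apiserver:v1.18.20
	# ...and likewise for the remaining v1.18.20 images listed in LoadImages above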
	I0914 21:50:17.743714   21723 ssh_runner.go:195] Run: crio config
	I0914 21:50:17.799566   21723 cni.go:84] Creating CNI manager for ""
	I0914 21:50:17.799590   21723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 21:50:17.799610   21723 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 21:50:17.799633   21723 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-235631 NodeName:ingress-addon-legacy-235631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 21:50:17.799816   21723 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-235631"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 21:50:17.799894   21723 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-235631 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-235631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 21:50:17.799945   21723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0914 21:50:17.808637   21723 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 21:50:17.808689   21723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 21:50:17.816388   21723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0914 21:50:17.830767   21723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0914 21:50:17.844510   21723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0914 21:50:17.858983   21723 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0914 21:50:17.862226   21723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
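Both /etc/hosts rewrites (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: drop any stale entry for the name, append the fresh mapping, and copy the temp file back into place with sudo. A hypothetical check of the result from the host would be:

	minikube ssh -p ingress-addon-legacy-235631 -- grep minikube.internal /etc/hosts
	# 192.168.39.1	host.minikube.internal
	# 192.168.39.250	control-plane.minikube.internal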
	I0914 21:50:17.872738   21723 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631 for IP: 192.168.39.250
	I0914 21:50:17.872765   21723 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:17.872932   21723 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 21:50:17.872980   21723 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 21:50:17.873034   21723 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.key
	I0914 21:50:17.873055   21723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt with IP's: []
	I0914 21:50:18.086347   21723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt ...
	I0914 21:50:18.086379   21723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: {Name:mkbe0840de3a8b673a8b720cbbc52238a7b1963f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:18.086573   21723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.key ...
	I0914 21:50:18.086591   21723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.key: {Name:mkffc4d427beb4a5a8fa088fe2ccac1d2adf8051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:18.086714   21723 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.key.6e35f005
	I0914 21:50:18.086741   21723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.crt.6e35f005 with IP's: [192.168.39.250 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 21:50:18.198809   21723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.crt.6e35f005 ...
	I0914 21:50:18.198839   21723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.crt.6e35f005: {Name:mkea137c8a35ea7f484e87baccc075ea356d4aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:18.199017   21723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.key.6e35f005 ...
	I0914 21:50:18.199037   21723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.key.6e35f005: {Name:mkc0a1317253fa5ef9fe188490c94ff610c2bea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:18.199134   21723 certs.go:337] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.crt.6e35f005 -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.crt
	I0914 21:50:18.199229   21723 certs.go:341] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.key.6e35f005 -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.key
	I0914 21:50:18.199317   21723 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.key
	I0914 21:50:18.199340   21723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.crt with IP's: []
	I0914 21:50:18.335123   21723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.crt ...
	I0914 21:50:18.335158   21723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.crt: {Name:mk442ff6438b2d48d130f64baf273682016ff6df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:18.335337   21723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.key ...
	I0914 21:50:18.335357   21723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.key: {Name:mka73832811f7cad15fd3b690ad4aeb9716287c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:18.335454   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 21:50:18.335500   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 21:50:18.335526   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 21:50:18.335541   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 21:50:18.335559   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 21:50:18.335584   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 21:50:18.335606   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 21:50:18.335628   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 21:50:18.335713   21723 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 21:50:18.335764   21723 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 21:50:18.335781   21723 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 21:50:18.335816   21723 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 21:50:18.335856   21723 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 21:50:18.335900   21723 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 21:50:18.335967   21723 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 21:50:18.336011   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /usr/share/ca-certificates/134852.pem
	I0914 21:50:18.336034   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:50:18.336057   21723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem -> /usr/share/ca-certificates/13485.pem
	I0914 21:50:18.336680   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 21:50:18.358686   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 21:50:18.379390   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 21:50:18.399726   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 21:50:18.420310   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 21:50:18.440524   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 21:50:18.460510   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 21:50:18.480363   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 21:50:18.500018   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 21:50:18.519863   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 21:50:18.540065   21723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 21:50:18.559883   21723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 21:50:18.573856   21723 ssh_runner.go:195] Run: openssl version
	I0914 21:50:18.619317   21723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 21:50:18.628149   21723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 21:50:18.632165   21723 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 21:50:18.632205   21723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 21:50:18.637032   21723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 21:50:18.645122   21723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 21:50:18.653297   21723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 21:50:18.657300   21723 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 21:50:18.657343   21723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 21:50:18.662162   21723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 21:50:18.670383   21723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 21:50:18.678689   21723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:50:18.682521   21723 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:50:18.682565   21723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:50:18.687260   21723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
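The /etc/ssl/certs links created above are standard OpenSSL subject-hash symlinks: each CA PEM is linked as <hash>.0 so the TLS stack can look it up by subject. The hash-to-file mapping implied by the commands in this log is reproducible with the same openssl invocation the tool runs:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem       # -> 51391683
	openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem      # -> 3ec20f2e
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem  # -> b5213941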
	I0914 21:50:18.695409   21723 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 21:50:18.698723   21723 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 21:50:18.698773   21723 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-235631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-235631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:50:18.698852   21723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 21:50:18.698908   21723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 21:50:18.724769   21723 cri.go:89] found id: ""
	I0914 21:50:18.724843   21723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 21:50:18.732793   21723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 21:50:18.740526   21723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 21:50:18.748749   21723 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 21:50:18.748785   21723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0914 21:50:18.807516   21723 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0914 21:50:18.807743   21723 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 21:50:18.927397   21723 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 21:50:18.927530   21723 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 21:50:18.927631   21723 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 21:50:19.079032   21723 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 21:50:19.079179   21723 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 21:50:19.079237   21723 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 21:50:19.192113   21723 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 21:50:19.194926   21723 out.go:204]   - Generating certificates and keys ...
	I0914 21:50:19.195037   21723 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 21:50:19.195143   21723 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 21:50:19.458708   21723 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 21:50:19.672135   21723 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 21:50:20.086197   21723 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 21:50:20.280689   21723 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 21:50:20.411138   21723 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 21:50:20.411279   21723 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-235631 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I0914 21:50:20.564569   21723 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 21:50:20.564728   21723 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-235631 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I0914 21:50:20.742977   21723 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 21:50:20.977128   21723 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 21:50:21.203812   21723 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 21:50:21.203882   21723 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 21:50:21.308336   21723 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 21:50:21.448015   21723 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 21:50:21.748671   21723 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 21:50:22.043292   21723 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 21:50:22.044043   21723 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 21:50:22.045868   21723 out.go:204]   - Booting up control plane ...
	I0914 21:50:22.045972   21723 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 21:50:22.049624   21723 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 21:50:22.054770   21723 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 21:50:22.054875   21723 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 21:50:22.055073   21723 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 21:50:31.056154   21723 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003348 seconds
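The wait-control-plane step above watches the static pod manifests kubeadm wrote under /etc/kubernetes/manifests. A quick way to confirm the same thing by hand on the node, reusing the profile name and the crictl label filter that appear earlier in this log, would be:

  minikube -p ingress-addon-legacy-235631 ssh -- sudo ls /etc/kubernetes/manifests
  minikube -p ingress-addon-legacy-235631 ssh -- sudo crictl ps --label io.kubernetes.pod.namespace=kube-system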
	I0914 21:50:31.056311   21723 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 21:50:31.069119   21723 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 21:50:31.586048   21723 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 21:50:31.586237   21723 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-235631 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 21:50:32.095909   21723 kubeadm.go:322] [bootstrap-token] Using token: ce9ccb.kcwm9h9v85mujl6k
	I0914 21:50:32.097706   21723 out.go:204]   - Configuring RBAC rules ...
	I0914 21:50:32.097862   21723 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 21:50:32.108377   21723 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 21:50:32.116797   21723 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 21:50:32.120969   21723 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 21:50:32.123685   21723 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 21:50:32.127370   21723 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 21:50:32.139112   21723 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 21:50:32.399616   21723 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 21:50:32.522944   21723 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 21:50:32.522967   21723 kubeadm.go:322] 
	I0914 21:50:32.523033   21723 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 21:50:32.523053   21723 kubeadm.go:322] 
	I0914 21:50:32.523135   21723 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 21:50:32.523166   21723 kubeadm.go:322] 
	I0914 21:50:32.523216   21723 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 21:50:32.523297   21723 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 21:50:32.523376   21723 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 21:50:32.523393   21723 kubeadm.go:322] 
	I0914 21:50:32.523478   21723 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 21:50:32.523583   21723 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 21:50:32.523680   21723 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 21:50:32.523689   21723 kubeadm.go:322] 
	I0914 21:50:32.523789   21723 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 21:50:32.523922   21723 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 21:50:32.523936   21723 kubeadm.go:322] 
	I0914 21:50:32.524050   21723 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ce9ccb.kcwm9h9v85mujl6k \
	I0914 21:50:32.524182   21723 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 21:50:32.524218   21723 kubeadm.go:322]     --control-plane 
	I0914 21:50:32.524228   21723 kubeadm.go:322] 
	I0914 21:50:32.524360   21723 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 21:50:32.524371   21723 kubeadm.go:322] 
	I0914 21:50:32.524462   21723 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ce9ccb.kcwm9h9v85mujl6k \
	I0914 21:50:32.524607   21723 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 21:50:32.524796   21723 kubeadm.go:322] W0914 21:50:18.791963     966 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0914 21:50:32.524882   21723 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 21:50:32.525057   21723 kubeadm.go:322] W0914 21:50:22.036877     966 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0914 21:50:32.525231   21723 kubeadm.go:322] W0914 21:50:22.037856     966 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
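If the join command printed above were ever needed, the --discovery-token-ca-cert-hash can be re-derived from the cluster CA. A hedged sketch of the standard kubeadm recipe, pointed at the certificate directory this run uses (/var/lib/minikube/certs, per the [certs] lines above) and assuming the default RSA CA key:

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | sha256sum | cut -d' ' -f1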
	I0914 21:50:32.525244   21723 cni.go:84] Creating CNI manager for ""
	I0914 21:50:32.525261   21723 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 21:50:32.526905   21723 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 21:50:32.528242   21723 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 21:50:32.538322   21723 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
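The 457-byte payload copied here is the bridge CNI configuration minikube selects for the kvm2 + crio combination; its contents are not echoed in this log. To see what was actually written, the file can be read back from the node (profile name assumed from this run):

  minikube -p ingress-addon-legacy-235631 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist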
	I0914 21:50:32.554282   21723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 21:50:32.554377   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=ingress-addon-legacy-235631 minikube.k8s.io/updated_at=2023_09_14T21_50_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:32.554377   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:32.572507   21723 ops.go:34] apiserver oom_adj: -16
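The two bootstrap kubectl calls above stamp the node with minikube's version and commit labels and grant cluster-admin to the kube-system default service account. Both results can be checked from the host, assuming the kubeconfig context created for this profile:

  kubectl --context ingress-addon-legacy-235631 get nodes --show-labels
  kubectl --context ingress-addon-legacy-235631 get clusterrolebinding minikube-rbac -o wide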
	I0914 21:50:32.846312   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:33.027134   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:33.613181   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:34.112795   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:34.613150   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:35.112536   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:35.613185   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:36.112598   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:36.613429   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:37.112757   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:37.612788   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:38.113077   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:38.613086   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:39.112586   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:39.613077   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:40.112990   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:40.612578   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:41.113099   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:41.613130   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:42.113158   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:42.612722   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:43.113044   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:43.613162   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:44.113219   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:44.612641   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:45.113317   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:45.613310   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:46.113389   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:46.612876   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:47.112476   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:47.613447   21723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:50:47.733803   21723 kubeadm.go:1081] duration metric: took 15.179485281s to wait for elevateKubeSystemPrivileges.
	I0914 21:50:47.733841   21723 kubeadm.go:406] StartCluster complete in 29.035075389s
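The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists (about 15s here) before it proceeds. A rough hand-rolled equivalent of that wait, using the same context-name assumption as the other examples:

  until kubectl --context ingress-addon-legacy-235631 -n default get serviceaccount default >/dev/null 2>&1; do
    sleep 0.5
  done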
	I0914 21:50:47.733862   21723 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:47.733943   21723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:50:47.734600   21723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:50:47.734834   21723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 21:50:47.734874   21723 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 21:50:47.734972   21723 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-235631"
	I0914 21:50:47.734996   21723 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-235631"
	I0914 21:50:47.735001   21723 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-235631"
	I0914 21:50:47.735028   21723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-235631"
	I0914 21:50:47.735053   21723 config.go:182] Loaded profile config "ingress-addon-legacy-235631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 21:50:47.735056   21723 host.go:66] Checking if "ingress-addon-legacy-235631" exists ...
	I0914 21:50:47.735530   21723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:50:47.735563   21723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:50:47.735577   21723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:50:47.735591   21723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:50:47.735584   21723 kapi.go:59] client config for ingress-addon-legacy-235631: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 21:50:47.736396   21723 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 21:50:47.750534   21723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46533
	I0914 21:50:47.751031   21723 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:50:47.751559   21723 main.go:141] libmachine: Using API Version  1
	I0914 21:50:47.751582   21723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:50:47.751882   21723 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:50:47.752351   21723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:50:47.752380   21723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:50:47.754321   21723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0914 21:50:47.754740   21723 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:50:47.755160   21723 main.go:141] libmachine: Using API Version  1
	I0914 21:50:47.755181   21723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:50:47.755554   21723 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:50:47.755721   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetState
	I0914 21:50:47.758395   21723 kapi.go:59] client config for ingress-addon-legacy-235631: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 21:50:47.765156   21723 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-235631"
	I0914 21:50:47.765197   21723 host.go:66] Checking if "ingress-addon-legacy-235631" exists ...
	I0914 21:50:47.765586   21723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:50:47.765619   21723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:50:47.768212   21723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I0914 21:50:47.768608   21723 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:50:47.769117   21723 main.go:141] libmachine: Using API Version  1
	I0914 21:50:47.769142   21723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:50:47.769449   21723 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:50:47.769627   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetState
	I0914 21:50:47.771192   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:47.773110   21723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 21:50:47.775086   21723 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 21:50:47.775102   21723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 21:50:47.775123   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:47.778417   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:47.778883   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:47.778919   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:47.779054   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:47.779228   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:47.779426   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:47.779613   21723 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa Username:docker}
	I0914 21:50:47.781531   21723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0914 21:50:47.781874   21723 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:50:47.782256   21723 main.go:141] libmachine: Using API Version  1
	I0914 21:50:47.782275   21723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:50:47.782544   21723 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:50:47.783107   21723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:50:47.783143   21723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:50:47.797165   21723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0914 21:50:47.797585   21723 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:50:47.798029   21723 main.go:141] libmachine: Using API Version  1
	I0914 21:50:47.798048   21723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:50:47.798335   21723 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:50:47.798547   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetState
	I0914 21:50:47.800183   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .DriverName
	I0914 21:50:47.800411   21723 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 21:50:47.800426   21723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 21:50:47.800444   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHHostname
	I0914 21:50:47.802794   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:47.803244   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:1a:a2", ip: ""} in network mk-ingress-addon-legacy-235631: {Iface:virbr1 ExpiryTime:2023-09-14 22:50:00 +0000 UTC Type:0 Mac:52:54:00:69:1a:a2 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ingress-addon-legacy-235631 Clientid:01:52:54:00:69:1a:a2}
	I0914 21:50:47.803284   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | domain ingress-addon-legacy-235631 has defined IP address 192.168.39.250 and MAC address 52:54:00:69:1a:a2 in network mk-ingress-addon-legacy-235631
	I0914 21:50:47.803434   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHPort
	I0914 21:50:47.803603   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHKeyPath
	I0914 21:50:47.803792   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .GetSSHUsername
	I0914 21:50:47.803940   21723 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/ingress-addon-legacy-235631/id_rsa Username:docker}
	I0914 21:50:47.813353   21723 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-235631" context rescaled to 1 replicas
	I0914 21:50:47.813394   21723 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 21:50:47.815939   21723 out.go:177] * Verifying Kubernetes components...
	I0914 21:50:47.817373   21723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 21:50:47.958033   21723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 21:50:48.042043   21723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 21:50:48.177352   21723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
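The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 in this run) and turns on query logging. Based on the expression in the command, the injected Corefile fragment should look like the following, and the result can be read back with kubectl (context name assumed):

  hosts {
     192.168.39.1 host.minikube.internal
     fallthrough
  }

  kubectl --context ingress-addon-legacy-235631 -n kube-system get configmap coredns -o yaml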
	I0914 21:50:48.178040   21723 kapi.go:59] client config for ingress-addon-legacy-235631: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 21:50:48.178392   21723 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-235631" to be "Ready" ...
	I0914 21:50:48.194660   21723 node_ready.go:49] node "ingress-addon-legacy-235631" has status "Ready":"True"
	I0914 21:50:48.194684   21723 node_ready.go:38] duration metric: took 16.254651ms waiting for node "ingress-addon-legacy-235631" to be "Ready" ...
	I0914 21:50:48.194695   21723 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 21:50:48.217283   21723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-6sf5k" in "kube-system" namespace to be "Ready" ...
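The pod_ready loop that starts here is minikube's own readiness poll over the system-critical pods. A rough kubectl equivalent for the CoreDNS portion of that wait (context name assumed, timeout mirroring the 6m budget in the log) would be:

  kubectl --context ingress-addon-legacy-235631 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m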
	I0914 21:50:48.835839   21723 main.go:141] libmachine: Making call to close driver server
	I0914 21:50:48.835863   21723 main.go:141] libmachine: Making call to close driver server
	I0914 21:50:48.835870   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .Close
	I0914 21:50:48.835876   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .Close
	I0914 21:50:48.835963   21723 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0914 21:50:48.836160   21723 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:50:48.836175   21723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:50:48.836187   21723 main.go:141] libmachine: Making call to close driver server
	I0914 21:50:48.836199   21723 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:50:48.836217   21723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:50:48.836245   21723 main.go:141] libmachine: Making call to close driver server
	I0914 21:50:48.836221   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .Close
	I0914 21:50:48.836259   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .Close
	I0914 21:50:48.836493   21723 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:50:48.836511   21723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:50:48.836497   21723 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:50:48.836538   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Closing plugin on server side
	I0914 21:50:48.836549   21723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:50:48.836570   21723 main.go:141] libmachine: Making call to close driver server
	I0914 21:50:48.836580   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) Calling .Close
	I0914 21:50:48.836820   21723 main.go:141] libmachine: (ingress-addon-legacy-235631) DBG | Closing plugin on server side
	I0914 21:50:48.836782   21723 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:50:48.836852   21723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:50:48.838565   21723 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 21:50:48.840559   21723 addons.go:502] enable addons completed in 1.105691577s: enabled=[storage-provisioner default-storageclass]
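With the two addons reported as enabled, their effects can be verified directly: the storage-provisioner pod in kube-system (it shows up in the pod list further down) and whatever StorageClass the default-storageclass addon registered. Context name assumed as before:

  kubectl --context ingress-addon-legacy-235631 -n kube-system get pod storage-provisioner
  kubectl --context ingress-addon-legacy-235631 get storageclass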
	I0914 21:50:49.221776   21723 pod_ready.go:97] error getting pod "coredns-66bff467f8-6sf5k" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-6sf5k" not found
	I0914 21:50:49.221812   21723 pod_ready.go:81] duration metric: took 1.004497634s waiting for pod "coredns-66bff467f8-6sf5k" in "kube-system" namespace to be "Ready" ...
	E0914 21:50:49.221826   21723 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-6sf5k" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-6sf5k" not found
	I0914 21:50:49.221839   21723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-864g6" in "kube-system" namespace to be "Ready" ...
	I0914 21:50:51.246259   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:50:53.246535   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:50:55.746666   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:50:57.746957   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:00.247510   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:02.745612   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:04.746159   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:06.747340   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:09.249361   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:11.746102   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:13.747514   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:16.247202   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:18.746172   21723 pod_ready.go:102] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"False"
	I0914 21:51:20.250570   21723 pod_ready.go:92] pod "coredns-66bff467f8-864g6" in "kube-system" namespace has status "Ready":"True"
	I0914 21:51:20.250595   21723 pod_ready.go:81] duration metric: took 31.02874835s waiting for pod "coredns-66bff467f8-864g6" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.250607   21723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-235631" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.256102   21723 pod_ready.go:92] pod "etcd-ingress-addon-legacy-235631" in "kube-system" namespace has status "Ready":"True"
	I0914 21:51:20.256122   21723 pod_ready.go:81] duration metric: took 5.508895ms waiting for pod "etcd-ingress-addon-legacy-235631" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.256133   21723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-235631" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.260367   21723 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-235631" in "kube-system" namespace has status "Ready":"True"
	I0914 21:51:20.260386   21723 pod_ready.go:81] duration metric: took 4.246804ms waiting for pod "kube-apiserver-ingress-addon-legacy-235631" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.260394   21723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-235631" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.264862   21723 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-235631" in "kube-system" namespace has status "Ready":"True"
	I0914 21:51:20.264878   21723 pod_ready.go:81] duration metric: took 4.478444ms waiting for pod "kube-controller-manager-ingress-addon-legacy-235631" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.264886   21723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9gptq" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.269829   21723 pod_ready.go:92] pod "kube-proxy-9gptq" in "kube-system" namespace has status "Ready":"True"
	I0914 21:51:20.269843   21723 pod_ready.go:81] duration metric: took 4.952588ms waiting for pod "kube-proxy-9gptq" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.269851   21723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-235631" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.441226   21723 request.go:629] Waited for 171.308967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-235631
	I0914 21:51:20.641212   21723 request.go:629] Waited for 196.332664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ingress-addon-legacy-235631
	I0914 21:51:20.643987   21723 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-235631" in "kube-system" namespace has status "Ready":"True"
	I0914 21:51:20.644006   21723 pod_ready.go:81] duration metric: took 374.14916ms waiting for pod "kube-scheduler-ingress-addon-legacy-235631" in "kube-system" namespace to be "Ready" ...
	I0914 21:51:20.644014   21723 pod_ready.go:38] duration metric: took 32.449308273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 21:51:20.644028   21723 api_server.go:52] waiting for apiserver process to appear ...
	I0914 21:51:20.644067   21723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 21:51:20.662852   21723 api_server.go:72] duration metric: took 32.849416093s to wait for apiserver process to appear ...
	I0914 21:51:20.662872   21723 api_server.go:88] waiting for apiserver healthz status ...
	I0914 21:51:20.662891   21723 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0914 21:51:20.670389   21723 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
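The healthz probe above hits the API server endpoint directly. On a minikube cluster /healthz is normally readable anonymously, so a hand-run check against the node address from this run would be expected to print the same "ok" (-k skips verification against the minikube CA):

  curl -k https://192.168.39.250:8443/healthz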
	I0914 21:51:20.671203   21723 api_server.go:141] control plane version: v1.18.20
	I0914 21:51:20.671220   21723 api_server.go:131] duration metric: took 8.341989ms to wait for apiserver health ...
	I0914 21:51:20.671227   21723 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 21:51:20.840481   21723 request.go:629] Waited for 169.203472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0914 21:51:20.856083   21723 system_pods.go:59] 7 kube-system pods found
	I0914 21:51:20.856114   21723 system_pods.go:61] "coredns-66bff467f8-864g6" [81212f97-935e-43b5-9408-0b2b85aca263] Running
	I0914 21:51:20.856138   21723 system_pods.go:61] "etcd-ingress-addon-legacy-235631" [a8a12eb3-ca6c-4141-9fb8-4261a25c0812] Running
	I0914 21:51:20.856144   21723 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-235631" [8c80f063-87de-4ed4-a97e-48f0cc3f74dd] Running
	I0914 21:51:20.856152   21723 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-235631" [39baae32-aacd-4d01-8a14-20f2be12e4c0] Running
	I0914 21:51:20.856159   21723 system_pods.go:61] "kube-proxy-9gptq" [dd32de81-2658-4179-a336-7a88d16bc10c] Running
	I0914 21:51:20.856169   21723 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-235631" [78c4ca8d-8006-4fb9-a2e5-3309a5965ce3] Running
	I0914 21:51:20.856176   21723 system_pods.go:61] "storage-provisioner" [27542fa5-b78c-479b-9271-fdfaab3c5ffe] Running
	I0914 21:51:20.856187   21723 system_pods.go:74] duration metric: took 184.953806ms to wait for pod list to return data ...
	I0914 21:51:20.856202   21723 default_sa.go:34] waiting for default service account to be created ...
	I0914 21:51:21.040566   21723 request.go:629] Waited for 184.277284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I0914 21:51:21.043510   21723 default_sa.go:45] found service account: "default"
	I0914 21:51:21.043531   21723 default_sa.go:55] duration metric: took 187.322272ms for default service account to be created ...
	I0914 21:51:21.043540   21723 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 21:51:21.241044   21723 request.go:629] Waited for 197.443965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0914 21:51:21.246821   21723 system_pods.go:86] 7 kube-system pods found
	I0914 21:51:21.246846   21723 system_pods.go:89] "coredns-66bff467f8-864g6" [81212f97-935e-43b5-9408-0b2b85aca263] Running
	I0914 21:51:21.246854   21723 system_pods.go:89] "etcd-ingress-addon-legacy-235631" [a8a12eb3-ca6c-4141-9fb8-4261a25c0812] Running
	I0914 21:51:21.246860   21723 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-235631" [8c80f063-87de-4ed4-a97e-48f0cc3f74dd] Running
	I0914 21:51:21.246866   21723 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-235631" [39baae32-aacd-4d01-8a14-20f2be12e4c0] Running
	I0914 21:51:21.246872   21723 system_pods.go:89] "kube-proxy-9gptq" [dd32de81-2658-4179-a336-7a88d16bc10c] Running
	I0914 21:51:21.246878   21723 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-235631" [78c4ca8d-8006-4fb9-a2e5-3309a5965ce3] Running
	I0914 21:51:21.246892   21723 system_pods.go:89] "storage-provisioner" [27542fa5-b78c-479b-9271-fdfaab3c5ffe] Running
	I0914 21:51:21.246902   21723 system_pods.go:126] duration metric: took 203.354834ms to wait for k8s-apps to be running ...
	I0914 21:51:21.246918   21723 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 21:51:21.246964   21723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 21:51:21.259106   21723 system_svc.go:56] duration metric: took 12.173581ms WaitForService to wait for kubelet.
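The kubelet check above is just systemctl over SSH. The equivalent interactive check, with the profile name from this run, would be something like:

  minikube -p ingress-addon-legacy-235631 ssh -- sudo systemctl is-active kubelet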
	I0914 21:51:21.259131   21723 kubeadm.go:581] duration metric: took 33.44569948s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 21:51:21.259151   21723 node_conditions.go:102] verifying NodePressure condition ...
	I0914 21:51:21.440487   21723 request.go:629] Waited for 181.256663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I0914 21:51:21.443725   21723 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 21:51:21.443751   21723 node_conditions.go:123] node cpu capacity is 2
	I0914 21:51:21.443761   21723 node_conditions.go:105] duration metric: took 184.604632ms to run NodePressure ...
	I0914 21:51:21.443771   21723 start.go:228] waiting for startup goroutines ...
	I0914 21:51:21.443776   21723 start.go:233] waiting for cluster config update ...
	I0914 21:51:21.443785   21723 start.go:242] writing updated cluster config ...
	I0914 21:51:21.444037   21723 ssh_runner.go:195] Run: rm -f paused
	I0914 21:51:21.491173   21723 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I0914 21:51:21.493052   21723 out.go:177] 
	W0914 21:51:21.494461   21723 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0914 21:51:21.495706   21723 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0914 21:51:21.497117   21723 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-235631" cluster and "default" namespace by default
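Given the version-skew warning above (host kubectl 1.28.2 against a 1.18.20 control plane), the bundled client the log suggests can be invoked per profile; a sketch using the same profile name:

  minikube -p ingress-addon-legacy-235631 kubectl -- get pods -A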
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 21:49:57 UTC, ends at Thu 2023-09-14 21:54:31 UTC. --
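The entries below are the tail of the node's CRI-O journal as captured by the test's log collection. To pull the same journal from a live profile (name assumed from this run), one option is:

  minikube -p ingress-addon-legacy-235631 ssh -- sudo journalctl -u crio --no-pager -n 200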
	Sep 14 21:54:30 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:30.885629500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=82207bb6-cfa5-473d-ab21-7be30188c182 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:30 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:30.886113113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed9ad5db91adca3c1270a677b69008040d66b2c83216e4a8a1262b86dde0435,PodSandboxId:2711f3fecc38bd34ba921debfb1776023fbc845c8b3d5af3e2813858eddb97f2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694728452797144425,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-fh8wp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b90e77ee-ad98-4bb9-831e-82e11998180b,},Annotations:map[string]string{io.kubernetes.container.hash: ac86b15a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05711e58d5d8b6a8d1c7fd0062f7300252d346feb7d16d478c0296d065656bf7,PodSandboxId:70a35ac2ee734cf9956ea6b974a7af219f8826f40228a63c6c5692f3fba8d376,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694728310566941984,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6f0e50f-6590-4f9a-9bf3-1aec235e7747,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: be893efe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59402151eaed429f3e6d92472e3c1431f11e7fdfdc4c233103c679900479b465,PodSandboxId:7c3f34a0f5d6acd16ac5ac9f234a4377e083966642d6ed29a21c150d554397b5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694728294691047506,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-lp9rk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e29e9775-5307-46cd-8534-8255f7f7739d,},Annotations:map[string]string{io.kubernetes.container.hash: 89af7b79,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:74354c88c9816874b060b5f7f4f5a1c5c2eda931a2b303d5bd46becad74980b6,PodSandboxId:cadf65eed398c8f6169abe36aa925cb63e27164e37d367253b8bf9d4bc0a8bee,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728287655828129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4nq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f102b786-6bb7-4f6e-a917-e4abb3a6a7fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1fbf31fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfeef3c2a85945a6c7e8bc3e99b5d8c630cd90faacceb0cea8e027b9d68b63b4,PodSandboxId:52a859aea5511a13c0e08dc92317c905ce8bc1ef269b6faa4229e73cb321dbcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728286613894702,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwzf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3163bbc1-6d21-4958-b383-ea53f938f5d3,},Annotations:map[string]string{io.kubernetes.container.hash: f9b53adc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f879b61bcbc51668ce1a2063526147b469d618c594a075a16e3193e5bef666,PodSandboxId:49d2e5b476f5a74944268afc535265eb57b21323e7a25c2afc376df06e6d4210,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728249551656386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27542fa5-b78c-479b-9271-fdfaab3c5ffe,},Annotations:map[string]string{io.kubernetes.container.hash: 9b591832,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f89b2839d1bea2f2a9247ec62c4872882c93a59270b920573e0ff9ab9b0d0a,PodSandboxId:52bc791e135d950b24f9b566aff967507d1b80f9450b444b3c59995d88c9b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694728249155502438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd32de81-2658-4179-a336-7a88d16bc10c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e566d5a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc44e13d5448149dd21ef2920049f815a7df36b348f46a044d8c727e729e6295,PodSandboxId:043b474802ba12738390567443ca39d0850c8ea50ccdc8b61657aedfc83475fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694728248304589139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-864g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81212f97-935e-43b5-9408-0b2b85aca263,},Annotations:map[string]string{io.kubernetes.container.hash: f02fa868,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64df88c2188656a255b7d102c377d3707995d164fb7b76dbf9a694fe2590e5c2,Pod
SandboxId:067db534da579b8e3a701b7b134403b694497d3fe7212827fcc6b8a4dbd6bf01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694728225054379537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2c4ac7d8d5f9c5675291c7e57c2c14,},Annotations:map[string]string{io.kubernetes.container.hash: 96694183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b188172297bcdbc23b44db211fc04ba87eba03fe4211e68e080e95f76bdf7b,PodSandboxId:3bcf514da2130dd20a9fbb063aed1300986b
43fe79b7c3988bdc5295517f6387,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694728224130330425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8342edb3dc3b3b4de73b04c0040dd1b969e3a8fa23d1675f1144ccff61d819,PodSandboxId:d16b9c37236913faf2408548d927d1d1fc571eb2ed
8c86129f31c23a050e2d2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694728223530025522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66072c571e884888f6ed1cf7665e8a84d20e2a63b5e9cfe5a021f3cc2104ec58,PodSandboxId:6cf6a1264d2e
8a9aa42e8aa7912f42406c69e7d2b37c0e27949fc6242989b9db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694728223504061405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=82207bb6-cfa5-473d-ab21-7be30188c182 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Sep 14 21:54:30 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:30.932224493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8ca6af3e-78f3-40d6-9f8c-b794bd1398c3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:30 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:30.932312294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8ca6af3e-78f3-40d6-9f8c-b794bd1398c3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:30 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:30.932580668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed9ad5db91adca3c1270a677b69008040d66b2c83216e4a8a1262b86dde0435,PodSandboxId:2711f3fecc38bd34ba921debfb1776023fbc845c8b3d5af3e2813858eddb97f2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694728452797144425,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-fh8wp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b90e77ee-ad98-4bb9-831e-82e11998180b,},Annotations:map[string]string{io.kubernetes.container.hash: ac86b15a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05711e58d5d8b6a8d1c7fd0062f7300252d346feb7d16d478c0296d065656bf7,PodSandboxId:70a35ac2ee734cf9956ea6b974a7af219f8826f40228a63c6c5692f3fba8d376,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694728310566941984,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6f0e50f-6590-4f9a-9bf3-1aec235e7747,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: be893efe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59402151eaed429f3e6d92472e3c1431f11e7fdfdc4c233103c679900479b465,PodSandboxId:7c3f34a0f5d6acd16ac5ac9f234a4377e083966642d6ed29a21c150d554397b5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694728294691047506,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-lp9rk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e29e9775-5307-46cd-8534-8255f7f7739d,},Annotations:map[string]string{io.kubernetes.container.hash: 89af7b79,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:74354c88c9816874b060b5f7f4f5a1c5c2eda931a2b303d5bd46becad74980b6,PodSandboxId:cadf65eed398c8f6169abe36aa925cb63e27164e37d367253b8bf9d4bc0a8bee,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728287655828129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4nq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f102b786-6bb7-4f6e-a917-e4abb3a6a7fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1fbf31fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfeef3c2a85945a6c7e8bc3e99b5d8c630cd90faacceb0cea8e027b9d68b63b4,PodSandboxId:52a859aea5511a13c0e08dc92317c905ce8bc1ef269b6faa4229e73cb321dbcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728286613894702,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwzf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3163bbc1-6d21-4958-b383-ea53f938f5d3,},Annotations:map[string]string{io.kubernetes.container.hash: f9b53adc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f879b61bcbc51668ce1a2063526147b469d618c594a075a16e3193e5bef666,PodSandboxId:49d2e5b476f5a74944268afc535265eb57b21323e7a25c2afc376df06e6d4210,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728249551656386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27542fa5-b78c-479b-9271-fdfaab3c5ffe,},Annotations:map[string]string{io.kubernetes.container.hash: 9b591832,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f89b2839d1bea2f2a9247ec62c4872882c93a59270b920573e0ff9ab9b0d0a,PodSandboxId:52bc791e135d950b24f9b566aff967507d1b80f9450b444b3c59995d88c9b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694728249155502438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd32de81-2658-4179-a336-7a88d16bc10c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e566d5a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc44e13d5448149dd21ef2920049f815a7df36b348f46a044d8c727e729e6295,PodSandboxId:043b474802ba12738390567443ca39d0850c8ea50ccdc8b61657aedfc83475fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694728248304589139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-864g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81212f97-935e-43b5-9408-0b2b85aca263,},Annotations:map[string]string{io.kubernetes.container.hash: f02fa868,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64df88c2188656a255b7d102c377d3707995d164fb7b76dbf9a694fe2590e5c2,Pod
SandboxId:067db534da579b8e3a701b7b134403b694497d3fe7212827fcc6b8a4dbd6bf01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694728225054379537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2c4ac7d8d5f9c5675291c7e57c2c14,},Annotations:map[string]string{io.kubernetes.container.hash: 96694183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b188172297bcdbc23b44db211fc04ba87eba03fe4211e68e080e95f76bdf7b,PodSandboxId:3bcf514da2130dd20a9fbb063aed1300986b
43fe79b7c3988bdc5295517f6387,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694728224130330425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8342edb3dc3b3b4de73b04c0040dd1b969e3a8fa23d1675f1144ccff61d819,PodSandboxId:d16b9c37236913faf2408548d927d1d1fc571eb2ed
8c86129f31c23a050e2d2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694728223530025522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66072c571e884888f6ed1cf7665e8a84d20e2a63b5e9cfe5a021f3cc2104ec58,PodSandboxId:6cf6a1264d2e
8a9aa42e8aa7912f42406c69e7d2b37c0e27949fc6242989b9db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694728223504061405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8ca6af3e-78f3-40d6-9f8c-b794bd1398c3 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.083571381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=44da45d6-ef85-4885-a9e2-5007ce736c09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.083683028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=44da45d6-ef85-4885-a9e2-5007ce736c09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.084091225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed9ad5db91adca3c1270a677b69008040d66b2c83216e4a8a1262b86dde0435,PodSandboxId:2711f3fecc38bd34ba921debfb1776023fbc845c8b3d5af3e2813858eddb97f2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694728452797144425,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-fh8wp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b90e77ee-ad98-4bb9-831e-82e11998180b,},Annotations:map[string]string{io.kubernetes.container.hash: ac86b15a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05711e58d5d8b6a8d1c7fd0062f7300252d346feb7d16d478c0296d065656bf7,PodSandboxId:70a35ac2ee734cf9956ea6b974a7af219f8826f40228a63c6c5692f3fba8d376,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694728310566941984,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6f0e50f-6590-4f9a-9bf3-1aec235e7747,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: be893efe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59402151eaed429f3e6d92472e3c1431f11e7fdfdc4c233103c679900479b465,PodSandboxId:7c3f34a0f5d6acd16ac5ac9f234a4377e083966642d6ed29a21c150d554397b5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694728294691047506,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-lp9rk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e29e9775-5307-46cd-8534-8255f7f7739d,},Annotations:map[string]string{io.kubernetes.container.hash: 89af7b79,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:74354c88c9816874b060b5f7f4f5a1c5c2eda931a2b303d5bd46becad74980b6,PodSandboxId:cadf65eed398c8f6169abe36aa925cb63e27164e37d367253b8bf9d4bc0a8bee,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728287655828129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4nq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f102b786-6bb7-4f6e-a917-e4abb3a6a7fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1fbf31fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfeef3c2a85945a6c7e8bc3e99b5d8c630cd90faacceb0cea8e027b9d68b63b4,PodSandboxId:52a859aea5511a13c0e08dc92317c905ce8bc1ef269b6faa4229e73cb321dbcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728286613894702,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwzf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3163bbc1-6d21-4958-b383-ea53f938f5d3,},Annotations:map[string]string{io.kubernetes.container.hash: f9b53adc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f879b61bcbc51668ce1a2063526147b469d618c594a075a16e3193e5bef666,PodSandboxId:49d2e5b476f5a74944268afc535265eb57b21323e7a25c2afc376df06e6d4210,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728249551656386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27542fa5-b78c-479b-9271-fdfaab3c5ffe,},Annotations:map[string]string{io.kubernetes.container.hash: 9b591832,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f89b2839d1bea2f2a9247ec62c4872882c93a59270b920573e0ff9ab9b0d0a,PodSandboxId:52bc791e135d950b24f9b566aff967507d1b80f9450b444b3c59995d88c9b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694728249155502438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd32de81-2658-4179-a336-7a88d16bc10c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e566d5a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc44e13d5448149dd21ef2920049f815a7df36b348f46a044d8c727e729e6295,PodSandboxId:043b474802ba12738390567443ca39d0850c8ea50ccdc8b61657aedfc83475fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694728248304589139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-864g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81212f97-935e-43b5-9408-0b2b85aca263,},Annotations:map[string]string{io.kubernetes.container.hash: f02fa868,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64df88c2188656a255b7d102c377d3707995d164fb7b76dbf9a694fe2590e5c2,Pod
SandboxId:067db534da579b8e3a701b7b134403b694497d3fe7212827fcc6b8a4dbd6bf01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694728225054379537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2c4ac7d8d5f9c5675291c7e57c2c14,},Annotations:map[string]string{io.kubernetes.container.hash: 96694183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b188172297bcdbc23b44db211fc04ba87eba03fe4211e68e080e95f76bdf7b,PodSandboxId:3bcf514da2130dd20a9fbb063aed1300986b
43fe79b7c3988bdc5295517f6387,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694728224130330425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8342edb3dc3b3b4de73b04c0040dd1b969e3a8fa23d1675f1144ccff61d819,PodSandboxId:d16b9c37236913faf2408548d927d1d1fc571eb2ed
8c86129f31c23a050e2d2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694728223530025522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66072c571e884888f6ed1cf7665e8a84d20e2a63b5e9cfe5a021f3cc2104ec58,PodSandboxId:6cf6a1264d2e
8a9aa42e8aa7912f42406c69e7d2b37c0e27949fc6242989b9db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694728223504061405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=44da45d6-ef85-4885-a9e2-5007ce736c09 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.088424793Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=8d5edc90-7494-4a6a-98fb-29442660c6a6 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.088926083Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2711f3fecc38bd34ba921debfb1776023fbc845c8b3d5af3e2813858eddb97f2,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-fh8wp,Uid:b90e77ee-ad98-4bb9-831e-82e11998180b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728449544934260,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-fh8wp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b90e77ee-ad98-4bb9-831e-82e11998180b,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:54:09.196589566Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70a35ac2ee734cf9956ea6b974a7af219f8826f40228a63c6c5692f3fba8d376,Metadata:&PodSandboxMetadata{Name:nginx,Uid:f6f0e50f-6590-4f9a-9bf3-1aec235e7747,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728306033700349,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6f0e50f-6590-4f9a-9bf3-1aec235e7747,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:51:45.695535515Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:57855f348a13f8282f09114cc5752e4698a69d375daf2aea38e5d372a2e96f81,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:88d1cc1b-09b8-448c-a113-a3d0b7ac8e53,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1694728297700749270,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d1cc1b-09b8-448c-a113-a3d0b7ac8e53,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configura
tion: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-09-14T21:51:36.458410265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7c3f34a0f5d6acd16ac5ac9f234a4377e083966642d6ed29a21c150d554397b5,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-lp9rk,Uid:e29e9775-5307-46cd-8534
-8255f7f7739d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1694728287132391930,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-lp9rk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e29e9775-5307-46cd-8534-8255f7f7739d,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:51:22.286839855Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cadf65eed398c8f6169abe36aa925cb63e27164e37d367253b8bf9d4bc0a8bee,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-tl4nq,Uid:f102b786-6bb7-4f6e-a917-e4abb3a6a7fc,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1694728282713326760,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/ins
tance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 30b10a21-d828-46a6-ae26-a35622349b23,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4nq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f102b786-6bb7-4f6e-a917-e4abb3a6a7fc,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:51:22.368484189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52a859aea5511a13c0e08dc92317c905ce8bc1ef269b6faa4229e73cb321dbcb,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-jwzf6,Uid:3163bbc1-6d21-4958-b383-ea53f938f5d3,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1694728282657137471,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 1c717594-e264-4829-8bf0-ece6cc6ba8f7,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: ingress-nginx-admission-create-jwzf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3163bbc1-6d21-4958-b383-ea53f938f5d3,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:51:22.313974464Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49d2e5b476f5a74944268afc535265eb57b21323e7a25c2afc376df06e6d4210,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:27542fa5-b78c-479b-9271-fdfaab3c5ffe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728249187733478,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27542fa5-b78c-479b-9271-fdfaab3c5ffe,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annota
tions\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T21:50:48.828804882Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:043b474802ba12738390567443ca39d0850c8ea50ccdc8b61657aedfc83475fa,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-864g6,Uid:81212f97-935e-43b5-9408-0b2b85aca263,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728247827748047,Labels:map[string]string{io.kubernetes.container.
name: POD,io.kubernetes.pod.name: coredns-66bff467f8-864g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81212f97-935e-43b5-9408-0b2b85aca263,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:50:47.470105104Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52bc791e135d950b24f9b566aff967507d1b80f9450b444b3c59995d88c9b738,Metadata:&PodSandboxMetadata{Name:kube-proxy-9gptq,Uid:dd32de81-2658-4179-a336-7a88d16bc10c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728247701969427,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9gptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd32de81-2658-4179-a336-7a88d16bc10c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:50:47.356460195Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:6cf6a1264d2e8a9aa42e8aa7912f42406c69e7d2b37c0e27949fc6242989b9db,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-235631,Uid:3422537e6e2a79365d4f294fe67c4d19,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728223116389148,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.250:8443,kubernetes.io/config.hash: 3422537e6e2a79365d4f294fe67c4d19,kubernetes.io/config.seen: 2023-09-14T21:50:22.045628719Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:067db534da579b8e3a701b7b134403b694497d3fe7212827fcc6b8a4dbd6bf01,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-235631,Uid:ab2c4ac7d8d5f9c5675291c7e5
7c2c14,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728223080018857,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2c4ac7d8d5f9c5675291c7e57c2c14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.250:2379,kubernetes.io/config.hash: ab2c4ac7d8d5f9c5675291c7e57c2c14,kubernetes.io/config.seen: 2023-09-14T21:50:22.051387197Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d16b9c37236913faf2408548d927d1d1fc571eb2ed8c86129f31c23a050e2d2d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-235631,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728223071136876,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.nam
e: kube-controller-manager-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernetes.io/config.seen: 2023-09-14T21:50:22.047574596Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3bcf514da2130dd20a9fbb063aed1300986b43fe79b7c3988bdc5295517f6387,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-235631,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728223066953545,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernete
s.io/config.seen: 2023-09-14T21:50:22.049230336Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=8d5edc90-7494-4a6a-98fb-29442660c6a6 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.090142154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3e40d546-62c6-477c-948e-583a009359aa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.090232541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3e40d546-62c6-477c-948e-583a009359aa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.090616048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed9ad5db91adca3c1270a677b69008040d66b2c83216e4a8a1262b86dde0435,PodSandboxId:2711f3fecc38bd34ba921debfb1776023fbc845c8b3d5af3e2813858eddb97f2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694728452797144425,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-fh8wp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b90e77ee-ad98-4bb9-831e-82e11998180b,},Annotations:map[string]string{io.kubernetes.container.hash: ac86b15a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05711e58d5d8b6a8d1c7fd0062f7300252d346feb7d16d478c0296d065656bf7,PodSandboxId:70a35ac2ee734cf9956ea6b974a7af219f8826f40228a63c6c5692f3fba8d376,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694728310566941984,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6f0e50f-6590-4f9a-9bf3-1aec235e7747,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: be893efe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59402151eaed429f3e6d92472e3c1431f11e7fdfdc4c233103c679900479b465,PodSandboxId:7c3f34a0f5d6acd16ac5ac9f234a4377e083966642d6ed29a21c150d554397b5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694728294691047506,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-lp9rk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e29e9775-5307-46cd-8534-8255f7f7739d,},Annotations:map[string]string{io.kubernetes.container.hash: 89af7b79,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:74354c88c9816874b060b5f7f4f5a1c5c2eda931a2b303d5bd46becad74980b6,PodSandboxId:cadf65eed398c8f6169abe36aa925cb63e27164e37d367253b8bf9d4bc0a8bee,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728287655828129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4nq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f102b786-6bb7-4f6e-a917-e4abb3a6a7fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1fbf31fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfeef3c2a85945a6c7e8bc3e99b5d8c630cd90faacceb0cea8e027b9d68b63b4,PodSandboxId:52a859aea5511a13c0e08dc92317c905ce8bc1ef269b6faa4229e73cb321dbcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728286613894702,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwzf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3163bbc1-6d21-4958-b383-ea53f938f5d3,},Annotations:map[string]string{io.kubernetes.container.hash: f9b53adc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f879b61bcbc51668ce1a2063526147b469d618c594a075a16e3193e5bef666,PodSandboxId:49d2e5b476f5a74944268afc535265eb57b21323e7a25c2afc376df06e6d4210,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728249551656386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27542fa5-b78c-479b-9271-fdfaab3c5ffe,},Annotations:map[string]string{io.kubernetes.container.hash: 9b591832,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f89b2839d1bea2f2a9247ec62c4872882c93a59270b920573e0ff9ab9b0d0a,PodSandboxId:52bc791e135d950b24f9b566aff967507d1b80f9450b444b3c59995d88c9b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694728249155502438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd32de81-2658-4179-a336-7a88d16bc10c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e566d5a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc44e13d5448149dd21ef2920049f815a7df36b348f46a044d8c727e729e6295,PodSandboxId:043b474802ba12738390567443ca39d0850c8ea50ccdc8b61657aedfc83475fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694728248304589139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-864g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81212f97-935e-43b5-9408-0b2b85aca263,},Annotations:map[string]string{io.kubernetes.container.hash: f02fa868,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64df88c2188656a255b7d102c377d3707995d164fb7b76dbf9a694fe2590e5c2,Pod
SandboxId:067db534da579b8e3a701b7b134403b694497d3fe7212827fcc6b8a4dbd6bf01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694728225054379537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2c4ac7d8d5f9c5675291c7e57c2c14,},Annotations:map[string]string{io.kubernetes.container.hash: 96694183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b188172297bcdbc23b44db211fc04ba87eba03fe4211e68e080e95f76bdf7b,PodSandboxId:3bcf514da2130dd20a9fbb063aed1300986b
43fe79b7c3988bdc5295517f6387,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694728224130330425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8342edb3dc3b3b4de73b04c0040dd1b969e3a8fa23d1675f1144ccff61d819,PodSandboxId:d16b9c37236913faf2408548d927d1d1fc571eb2ed
8c86129f31c23a050e2d2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694728223530025522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66072c571e884888f6ed1cf7665e8a84d20e2a63b5e9cfe5a021f3cc2104ec58,PodSandboxId:6cf6a1264d2e
8a9aa42e8aa7912f42406c69e7d2b37c0e27949fc6242989b9db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694728223504061405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3e40d546-62c6-477c-948e-583a009359aa name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.118205910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ba5dbc3f-3df7-4292-b918-a0038ccc4243 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.118294416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ba5dbc3f-3df7-4292-b918-a0038ccc4243 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 21:54:31 ingress-addon-legacy-235631 crio[719]: time="2023-09-14 21:54:31.118547057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ed9ad5db91adca3c1270a677b69008040d66b2c83216e4a8a1262b86dde0435,PodSandboxId:2711f3fecc38bd34ba921debfb1776023fbc845c8b3d5af3e2813858eddb97f2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694728452797144425,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-fh8wp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b90e77ee-ad98-4bb9-831e-82e11998180b,},Annotations:map[string]string{io.kubernetes.container.hash: ac86b15a,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05711e58d5d8b6a8d1c7fd0062f7300252d346feb7d16d478c0296d065656bf7,PodSandboxId:70a35ac2ee734cf9956ea6b974a7af219f8826f40228a63c6c5692f3fba8d376,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694728310566941984,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6f0e50f-6590-4f9a-9bf3-1aec235e7747,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: be893efe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59402151eaed429f3e6d92472e3c1431f11e7fdfdc4c233103c679900479b465,PodSandboxId:7c3f34a0f5d6acd16ac5ac9f234a4377e083966642d6ed29a21c150d554397b5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694728294691047506,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-lp9rk,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e29e9775-5307-46cd-8534-8255f7f7739d,},Annotations:map[string]string{io.kubernetes.container.hash: 89af7b79,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:74354c88c9816874b060b5f7f4f5a1c5c2eda931a2b303d5bd46becad74980b6,PodSandboxId:cadf65eed398c8f6169abe36aa925cb63e27164e37d367253b8bf9d4bc0a8bee,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728287655828129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4nq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f102b786-6bb7-4f6e-a917-e4abb3a6a7fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1fbf31fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfeef3c2a85945a6c7e8bc3e99b5d8c630cd90faacceb0cea8e027b9d68b63b4,PodSandboxId:52a859aea5511a13c0e08dc92317c905ce8bc1ef269b6faa4229e73cb321dbcb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694728286613894702,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwzf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3163bbc1-6d21-4958-b383-ea53f938f5d3,},Annotations:map[string]string{io.kubernetes.container.hash: f9b53adc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f879b61bcbc51668ce1a2063526147b469d618c594a075a16e3193e5bef666,PodSandboxId:49d2e5b476f5a74944268afc535265eb57b21323e7a25c2afc376df06e6d4210,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728249551656386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27542fa5-b78c-479b-9271-fdfaab3c5ffe,},Annotations:map[string]string{io.kubernetes.container.hash: 9b591832,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f89b2839d1bea2f2a9247ec62c4872882c93a59270b920573e0ff9ab9b0d0a,PodSandboxId:52bc791e135d950b24f9b566aff967507d1b80f9450b444b3c59995d88c9b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694728249155502438,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd32de81-2658-4179-a336-7a88d16bc10c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e566d5a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc44e13d5448149dd21ef2920049f815a7df36b348f46a044d8c727e729e6295,PodSandboxId:043b474802ba12738390567443ca39d0850c8ea50ccdc8b61657aedfc83475fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694728248304589139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-864g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81212f97-935e-43b5-9408-0b2b85aca263,},Annotations:map[string]string{io.kubernetes.container.hash: f02fa868,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64df88c2188656a255b7d102c377d3707995d164fb7b76dbf9a694fe2590e5c2,Pod
SandboxId:067db534da579b8e3a701b7b134403b694497d3fe7212827fcc6b8a4dbd6bf01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694728225054379537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2c4ac7d8d5f9c5675291c7e57c2c14,},Annotations:map[string]string{io.kubernetes.container.hash: 96694183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b188172297bcdbc23b44db211fc04ba87eba03fe4211e68e080e95f76bdf7b,PodSandboxId:3bcf514da2130dd20a9fbb063aed1300986b
43fe79b7c3988bdc5295517f6387,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694728224130330425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8342edb3dc3b3b4de73b04c0040dd1b969e3a8fa23d1675f1144ccff61d819,PodSandboxId:d16b9c37236913faf2408548d927d1d1fc571eb2ed
8c86129f31c23a050e2d2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694728223530025522,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66072c571e884888f6ed1cf7665e8a84d20e2a63b5e9cfe5a021f3cc2104ec58,PodSandboxId:6cf6a1264d2e
8a9aa42e8aa7912f42406c69e7d2b37c0e27949fc6242989b9db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694728223504061405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-235631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3422537e6e2a79365d4f294fe67c4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 5127fd29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ba5dbc3f-3df7-4292-b918-a0038ccc4243 name=/runtime.v1alpha2.Runt
imeService/ListContainers
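The ListContainers and ListPodSandbox entries above are CRI RuntimeService calls that crio logs at debug level while it is being polled. A minimal sketch of issuing the same queries by hand against this node, assuming crictl is available inside the minikube guest and crio is listening on its default socket path:

	# open a shell on the ingress-addon-legacy node (profile name taken from the logs above)
	minikube ssh -p ingress-addon-legacy-235631
	# list all containers, running and exited - the same data as ListContainersResponse
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	# list pod sandboxes - the same data as ListPodSandboxResponse
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods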
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	7ed9ad5db91ad       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb            18 seconds ago      Running             hello-world-app           0                   2711f3fecc38b
	05711e58d5d8b       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   70a35ac2ee734
	59402151eaed4       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   7c3f34a0f5d6a
	74354c88c9816       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   cadf65eed398c
	dfeef3c2a8594       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   52a859aea5511
	04f879b61bcbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   49d2e5b476f5a
	d6f89b2839d1b       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   52bc791e135d9
	bc44e13d54481       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   043b474802ba1
	64df88c218865       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   067db534da579
	c4b188172297b       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   3bcf514da2130
	8e8342edb3dc3       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   d16b9c3723691
	66072c571e884       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   6cf6a1264d2e8
	
	* 
	* ==> coredns [bc44e13d5448149dd21ef2920049f815a7df36b348f46a044d8c727e729e6295] <==
	* [INFO] 10.244.0.6:53984 - 17830 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000161068s
	[INFO] 10.244.0.6:53984 - 32043 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000152488s
	[INFO] 10.244.0.6:53984 - 7420 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065243s
	[INFO] 10.244.0.6:53984 - 15168 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000126709s
	[INFO] 10.244.0.6:54911 - 57575 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000106893s
	[INFO] 10.244.0.6:54911 - 22322 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076902s
	[INFO] 10.244.0.6:54911 - 14487 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000116806s
	[INFO] 10.244.0.6:54911 - 45903 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041344s
	[INFO] 10.244.0.6:54911 - 10667 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035619s
	[INFO] 10.244.0.6:54911 - 20317 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065022s
	[INFO] 10.244.0.6:54911 - 54757 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070043s
	[INFO] 10.244.0.6:53861 - 23804 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000142329s
	[INFO] 10.244.0.6:48261 - 14028 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075168s
	[INFO] 10.244.0.6:48261 - 56010 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119553s
	[INFO] 10.244.0.6:53861 - 47250 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069325s
	[INFO] 10.244.0.6:53861 - 17737 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067653s
	[INFO] 10.244.0.6:48261 - 51832 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055494s
	[INFO] 10.244.0.6:48261 - 10698 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063692s
	[INFO] 10.244.0.6:53861 - 59794 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005506s
	[INFO] 10.244.0.6:53861 - 7286 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060761s
	[INFO] 10.244.0.6:48261 - 21228 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000144274s
	[INFO] 10.244.0.6:53861 - 60753 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061216s
	[INFO] 10.244.0.6:48261 - 57989 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000126889s
	[INFO] 10.244.0.6:48261 - 33654 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068601s
	[INFO] 10.244.0.6:53861 - 32339 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000123591s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-235631
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-235631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=ingress-addon-legacy-235631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T21_50_32_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:50:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-235631
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 21:54:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 21:52:12 +0000   Thu, 14 Sep 2023 21:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 21:52:12 +0000   Thu, 14 Sep 2023 21:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 21:52:12 +0000   Thu, 14 Sep 2023 21:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 21:52:12 +0000   Thu, 14 Sep 2023 21:50:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ingress-addon-legacy-235631
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b0c2877dd194f60adf485ed2c8c07a3
	  System UUID:                9b0c2877-dd19-4f60-adf4-85ed2c8c07a3
	  Boot ID:                    5532b806-43da-48d3-a4ea-d4d7ec32fe57
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-fh8wp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-864g6                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-235631                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-apiserver-ingress-addon-legacy-235631             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-235631    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-9gptq                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-235631             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m9s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-235631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-235631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-235631 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m59s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s                kubelet     Node ingress-addon-legacy-235631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s                kubelet     Node ingress-addon-legacy-235631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s                kubelet     Node ingress-addon-legacy-235631 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m49s                kubelet     Node ingress-addon-legacy-235631 status is now: NodeReady
	  Normal  Starting                 3m42s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep14 21:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092795] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.258447] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.480075] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.122911] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Sep14 21:50] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.078520] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.097280] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.131609] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.089982] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.191137] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +7.706840] systemd-fstab-generator[1035]: Ignoring "noauto" for root device
	[  +3.023581] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +10.070262] systemd-fstab-generator[1445]: Ignoring "noauto" for root device
	[ +15.642992] kauditd_printk_skb: 6 callbacks suppressed
	[Sep14 21:51] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.800642] kauditd_printk_skb: 6 callbacks suppressed
	[ +18.241275] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.305917] kauditd_printk_skb: 3 callbacks suppressed
	[Sep14 21:54] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [64df88c2188656a255b7d102c377d3707995d164fb7b76dbf9a694fe2590e5c2] <==
	* raft2023/09/14 21:50:25 INFO: a69e859ffe38fcde switched to configuration voters=(12006180578827762910)
	2023-09-14 21:50:25.240198 W | auth: simple token is not cryptographically signed
	2023-09-14 21:50:25.245614 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-14 21:50:25.247498 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-14 21:50:25.247714 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-14 21:50:25.248165 I | etcdserver: a69e859ffe38fcde as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-14 21:50:25.248358 I | embed: listening for peers on 192.168.39.250:2380
	raft2023/09/14 21:50:25 INFO: a69e859ffe38fcde switched to configuration voters=(12006180578827762910)
	2023-09-14 21:50:25.248840 I | etcdserver/membership: added member a69e859ffe38fcde [https://192.168.39.250:2380] to cluster f7a04275a0bf31
	raft2023/09/14 21:50:26 INFO: a69e859ffe38fcde is starting a new election at term 1
	raft2023/09/14 21:50:26 INFO: a69e859ffe38fcde became candidate at term 2
	raft2023/09/14 21:50:26 INFO: a69e859ffe38fcde received MsgVoteResp from a69e859ffe38fcde at term 2
	raft2023/09/14 21:50:26 INFO: a69e859ffe38fcde became leader at term 2
	raft2023/09/14 21:50:26 INFO: raft.node: a69e859ffe38fcde elected leader a69e859ffe38fcde at term 2
	2023-09-14 21:50:26.233541 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-14 21:50:26.234821 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-14 21:50:26.234926 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-14 21:50:26.234961 I | embed: ready to serve client requests
	2023-09-14 21:50:26.235013 I | etcdserver: published {Name:ingress-addon-legacy-235631 ClientURLs:[https://192.168.39.250:2379]} to cluster f7a04275a0bf31
	2023-09-14 21:50:26.235354 I | embed: ready to serve client requests
	2023-09-14 21:50:26.236464 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-14 21:50:26.240401 I | embed: serving client requests on 192.168.39.250:2379
	2023-09-14 21:50:47.303411 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (101.78116ms) to execute
	2023-09-14 21:51:32.125096 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" " with result "range_response_count:3 size:13726" took too long (199.061341ms) to execute
	2023-09-14 21:51:56.197838 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (236.623342ms) to execute
	
	* 
	* ==> kernel <==
	*  21:54:31 up 4 min,  0 users,  load average: 0.62, 0.43, 0.19
	Linux ingress-addon-legacy-235631 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [66072c571e884888f6ed1cf7665e8a84d20e2a63b5e9cfe5a021f3cc2104ec58] <==
	* W0914 21:50:30.625562       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.250]
	I0914 21:50:30.626369       1 controller.go:609] quota admission added evaluator for: endpoints
	I0914 21:50:30.633940       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 21:50:31.376218       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0914 21:50:32.361525       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0914 21:50:32.495268       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0914 21:50:32.851219       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 21:50:46.881369       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0914 21:50:47.117998       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0914 21:50:47.308282       1 trace.go:116] Trace[1041458263]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2023-09-14 21:50:46.785138998 +0000 UTC m=+23.127626821) (total time: 523.120521ms):
	Trace[1041458263]: [523.085765ms] [521.607931ms] Transaction committed
	I0914 21:50:47.308404       1 trace.go:116] Trace[1433171620]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/view,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:192.168.39.250 (started: 2023-09-14 21:50:46.784825023 +0000 UTC m=+23.127313276) (total time: 523.561741ms):
	Trace[1433171620]: [523.523293ms] [523.329037ms] Object stored in database
	I0914 21:50:47.308559       1 trace.go:116] Trace[73016085]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2023-09-14 21:50:46.788870794 +0000 UTC m=+23.131358635) (total time: 519.678475ms):
	Trace[73016085]: [519.662176ms] [519.288314ms] Transaction committed
	I0914 21:50:47.308621       1 trace.go:116] Trace[1264896448]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/admin,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:192.168.39.250 (started: 2023-09-14 21:50:46.788436224 +0000 UTC m=+23.130924125) (total time: 520.173435ms):
	Trace[1264896448]: [520.15059ms] [519.76998ms] Object stored in database
	I0914 21:50:47.308733       1 trace.go:116] Trace[2037560801]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2023-09-14 21:50:46.790040225 +0000 UTC m=+23.132528046) (total time: 518.684472ms):
	Trace[2037560801]: [518.65606ms] [517.56411ms] Transaction committed
	I0914 21:50:47.308862       1 trace.go:116] Trace[1825242339]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/edit,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:192.168.39.250 (started: 2023-09-14 21:50:46.789444713 +0000 UTC m=+23.131932828) (total time: 519.400882ms):
	Trace[1825242339]: [519.368634ms] [518.978052ms] Object stored in database
	I0914 21:50:47.317938       1 trace.go:116] Trace[1516628909]: "Create" url:/api/v1/namespaces/kube-node-lease/serviceaccounts,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:service-account-controller,client:192.168.39.250 (started: 2023-09-14 21:50:46.779954711 +0000 UTC m=+23.122442524) (total time: 537.956202ms):
	Trace[1516628909]: [535.581199ms] [535.545998ms] Object stored in database
	I0914 21:51:22.268329       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0914 21:51:45.505271       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [8e8342edb3dc3b3b4de73b04c0040dd1b969e3a8fa23d1675f1144ccff61d819] <==
	* I0914 21:50:47.342836       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"254b81f1-1bc0-4502-85ee-31fd514cc91c", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-9gptq
	I0914 21:50:47.344178       1 shared_informer.go:230] Caches are synced for disruption 
	I0914 21:50:47.344250       1 disruption.go:339] Sending events to api server.
	E0914 21:50:47.345610       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0914 21:50:47.351852       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0914 21:50:47.373314       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c0f5402e-02a8-476c-85ca-1aa9c5e41cc5", APIVersion:"apps/v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-6sf5k
	I0914 21:50:47.381837       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 21:50:47.381861       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 21:50:47.382051       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0914 21:50:47.408123       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0914 21:50:47.429634       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c0f5402e-02a8-476c-85ca-1aa9c5e41cc5", APIVersion:"apps/v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-864g6
	I0914 21:50:47.430009       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 21:50:47.578872       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0914 21:50:47.579034       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 21:50:47.809339       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d8b79c75-7b69-40d2-8f30-ddcd605f1775", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0914 21:50:47.910214       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c0f5402e-02a8-476c-85ca-1aa9c5e41cc5", APIVersion:"apps/v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-6sf5k
	I0914 21:51:22.243185       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3cb3b78d-87a9-4330-89fd-14d451dcfbbd", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0914 21:51:22.268333       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"697f09d5-fbf2-41d0-a549-71692c3350d6", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-lp9rk
	I0914 21:51:22.315890       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1c717594-e264-4829-8bf0-ece6cc6ba8f7", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-jwzf6
	I0914 21:51:22.357294       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"30b10a21-d828-46a6-ae26-a35622349b23", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-tl4nq
	I0914 21:51:27.055395       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1c717594-e264-4829-8bf0-ece6cc6ba8f7", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0914 21:51:28.070722       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"30b10a21-d828-46a6-ae26-a35622349b23", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0914 21:54:09.158442       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"1cee8414-ce84-4c21-94b9-70b2d61805b3", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0914 21:54:09.187270       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"5785f245-378d-437d-aa41-d7a5df34b0a8", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-fh8wp
	E0914 21:54:28.330983       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-5pgkm" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [d6f89b2839d1bea2f2a9247ec62c4872882c93a59270b920573e0ff9ab9b0d0a] <==
	* W0914 21:50:49.407017       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0914 21:50:49.415537       1 node.go:136] Successfully retrieved node IP: 192.168.39.250
	I0914 21:50:49.415581       1 server_others.go:186] Using iptables Proxier.
	I0914 21:50:49.415739       1 server.go:583] Version: v1.18.20
	I0914 21:50:49.418528       1 config.go:133] Starting endpoints config controller
	I0914 21:50:49.418592       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0914 21:50:49.418620       1 config.go:315] Starting service config controller
	I0914 21:50:49.418623       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0914 21:50:49.518741       1 shared_informer.go:230] Caches are synced for service config 
	I0914 21:50:49.518745       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [c4b188172297bcdbc23b44db211fc04ba87eba03fe4211e68e080e95f76bdf7b] <==
	* I0914 21:50:29.140145       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 21:50:29.141980       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0914 21:50:29.142080       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 21:50:29.142088       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 21:50:29.142097       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0914 21:50:29.145518       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 21:50:29.145642       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 21:50:29.145735       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 21:50:29.145869       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:50:29.146015       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 21:50:29.146507       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:50:29.146615       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 21:50:29.146696       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:50:29.146818       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 21:50:29.147008       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 21:50:29.147143       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 21:50:29.147221       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:50:29.993010       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:50:30.000301       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 21:50:30.001965       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:50:30.094432       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 21:50:30.266125       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 21:50:30.281712       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:50:30.337314       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0914 21:50:30.642312       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 21:49:57 UTC, ends at Thu 2023-09-14 21:54:31 UTC. --
	Sep 14 21:51:29 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:51:29.344930    1452 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f102b786-6bb7-4f6e-a917-e4abb3a6a7fc-ingress-nginx-admission-token-t58ms" (OuterVolumeSpecName: "ingress-nginx-admission-token-t58ms") pod "f102b786-6bb7-4f6e-a917-e4abb3a6a7fc" (UID: "f102b786-6bb7-4f6e-a917-e4abb3a6a7fc"). InnerVolumeSpecName "ingress-nginx-admission-token-t58ms". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 21:51:29 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:51:29.438522    1452 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-t58ms" (UniqueName: "kubernetes.io/secret/f102b786-6bb7-4f6e-a917-e4abb3a6a7fc-ingress-nginx-admission-token-t58ms") on node "ingress-addon-legacy-235631" DevicePath ""
	Sep 14 21:51:36 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:51:36.459073    1452 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 14 21:51:36 ingress-addon-legacy-235631 kubelet[1452]: E0914 21:51:36.462051    1452 reflector.go:178] object-"kube-system"/"minikube-ingress-dns-token-l5xj4": Failed to list *v1.Secret: secrets "minikube-ingress-dns-token-l5xj4" is forbidden: User "system:node:ingress-addon-legacy-235631" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "ingress-addon-legacy-235631" and this object
	Sep 14 21:51:36 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:51:36.561364    1452 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-l5xj4" (UniqueName: "kubernetes.io/secret/88d1cc1b-09b8-448c-a113-a3d0b7ac8e53-minikube-ingress-dns-token-l5xj4") pod "kube-ingress-dns-minikube" (UID: "88d1cc1b-09b8-448c-a113-a3d0b7ac8e53")
	Sep 14 21:51:45 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:51:45.695601    1452 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 14 21:51:45 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:51:45.790139    1452 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9hzlx" (UniqueName: "kubernetes.io/secret/f6f0e50f-6590-4f9a-9bf3-1aec235e7747-default-token-9hzlx") pod "nginx" (UID: "f6f0e50f-6590-4f9a-9bf3-1aec235e7747")
	Sep 14 21:54:09 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:09.197170    1452 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 14 21:54:09 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:09.303827    1452 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9hzlx" (UniqueName: "kubernetes.io/secret/b90e77ee-ad98-4bb9-831e-82e11998180b-default-token-9hzlx") pod "hello-world-app-5f5d8b66bb-fh8wp" (UID: "b90e77ee-ad98-4bb9-831e-82e11998180b")
	Sep 14 21:54:11 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:11.016840    1452 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 6d47a38c725974d1e28d84ba0fe69b63c67eb8459137b1a55b60a82e1ed24522
	Sep 14 21:54:11 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:11.057851    1452 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 6d47a38c725974d1e28d84ba0fe69b63c67eb8459137b1a55b60a82e1ed24522
	Sep 14 21:54:11 ingress-addon-legacy-235631 kubelet[1452]: E0914 21:54:11.058348    1452 remote_runtime.go:295] ContainerStatus "6d47a38c725974d1e28d84ba0fe69b63c67eb8459137b1a55b60a82e1ed24522" from runtime service failed: rpc error: code = NotFound desc = could not find container "6d47a38c725974d1e28d84ba0fe69b63c67eb8459137b1a55b60a82e1ed24522": container with ID starting with 6d47a38c725974d1e28d84ba0fe69b63c67eb8459137b1a55b60a82e1ed24522 not found: ID does not exist
	Sep 14 21:54:11 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:11.209877    1452 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-l5xj4" (UniqueName: "kubernetes.io/secret/88d1cc1b-09b8-448c-a113-a3d0b7ac8e53-minikube-ingress-dns-token-l5xj4") pod "88d1cc1b-09b8-448c-a113-a3d0b7ac8e53" (UID: "88d1cc1b-09b8-448c-a113-a3d0b7ac8e53")
	Sep 14 21:54:11 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:11.223124    1452 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d1cc1b-09b8-448c-a113-a3d0b7ac8e53-minikube-ingress-dns-token-l5xj4" (OuterVolumeSpecName: "minikube-ingress-dns-token-l5xj4") pod "88d1cc1b-09b8-448c-a113-a3d0b7ac8e53" (UID: "88d1cc1b-09b8-448c-a113-a3d0b7ac8e53"). InnerVolumeSpecName "minikube-ingress-dns-token-l5xj4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 21:54:11 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:11.310180    1452 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-l5xj4" (UniqueName: "kubernetes.io/secret/88d1cc1b-09b8-448c-a113-a3d0b7ac8e53-minikube-ingress-dns-token-l5xj4") on node "ingress-addon-legacy-235631" DevicePath ""
	Sep 14 21:54:23 ingress-addon-legacy-235631 kubelet[1452]: E0914 21:54:23.684966    1452 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-lp9rk.1784e28f0484cfd5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-lp9rk", UID:"e29e9775-5307-46cd-8534-8255f7f7739d", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-235631"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138fe23e8a8f9d5, ext:231350650111, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138fe23e8a8f9d5, ext:231350650111, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-lp9rk.1784e28f0484cfd5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 14 21:54:23 ingress-addon-legacy-235631 kubelet[1452]: E0914 21:54:23.697329    1452 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-lp9rk.1784e28f0484cfd5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-lp9rk", UID:"e29e9775-5307-46cd-8534-8255f7f7739d", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-235631"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138fe23e8a8f9d5, ext:231350650111, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138fe23e92e4693, ext:231359386045, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-lp9rk.1784e28f0484cfd5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 14 21:54:26 ingress-addon-legacy-235631 kubelet[1452]: W0914 21:54:26.073286    1452 pod_container_deletor.go:77] Container "7c3f34a0f5d6acd16ac5ac9f234a4377e083966642d6ed29a21c150d554397b5" not found in pod's containers
	Sep 14 21:54:27 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:27.864018    1452 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-4tchg" (UniqueName: "kubernetes.io/secret/e29e9775-5307-46cd-8534-8255f7f7739d-ingress-nginx-token-4tchg") pod "e29e9775-5307-46cd-8534-8255f7f7739d" (UID: "e29e9775-5307-46cd-8534-8255f7f7739d")
	Sep 14 21:54:27 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:27.864080    1452 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e29e9775-5307-46cd-8534-8255f7f7739d-webhook-cert") pod "e29e9775-5307-46cd-8534-8255f7f7739d" (UID: "e29e9775-5307-46cd-8534-8255f7f7739d")
	Sep 14 21:54:27 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:27.866413    1452 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29e9775-5307-46cd-8534-8255f7f7739d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e29e9775-5307-46cd-8534-8255f7f7739d" (UID: "e29e9775-5307-46cd-8534-8255f7f7739d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 21:54:27 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:27.868694    1452 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29e9775-5307-46cd-8534-8255f7f7739d-ingress-nginx-token-4tchg" (OuterVolumeSpecName: "ingress-nginx-token-4tchg") pod "e29e9775-5307-46cd-8534-8255f7f7739d" (UID: "e29e9775-5307-46cd-8534-8255f7f7739d"). InnerVolumeSpecName "ingress-nginx-token-4tchg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 21:54:27 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:27.964356    1452 reconciler.go:319] Volume detached for volume "ingress-nginx-token-4tchg" (UniqueName: "kubernetes.io/secret/e29e9775-5307-46cd-8534-8255f7f7739d-ingress-nginx-token-4tchg") on node "ingress-addon-legacy-235631" DevicePath ""
	Sep 14 21:54:27 ingress-addon-legacy-235631 kubelet[1452]: I0914 21:54:27.964382    1452 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e29e9775-5307-46cd-8534-8255f7f7739d-webhook-cert") on node "ingress-addon-legacy-235631" DevicePath ""
	Sep 14 21:54:28 ingress-addon-legacy-235631 kubelet[1452]: W0914 21:54:28.843963    1452 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/e29e9775-5307-46cd-8534-8255f7f7739d/volumes" does not exist
	
	* 
	* ==> storage-provisioner [04f879b61bcbc51668ce1a2063526147b469d618c594a075a16e3193e5bef666] <==
	* I0914 21:50:49.649171       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 21:50:49.661341       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 21:50:49.661410       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 21:50:49.674846       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 21:50:49.675567       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79d3ee0c-790b-473c-9a13-df6c40ce1af8", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-235631_b4a5e0e3-1379-44cc-9cc9-4f9e0bfbf289 became leader
	I0914 21:50:49.675827       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-235631_b4a5e0e3-1379-44cc-9cc9-4f9e0bfbf289!
	I0914 21:50:49.776967       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-235631_b4a5e0e3-1379-44cc-9cc9-4f9e0bfbf289!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-235631 -n ingress-addon-legacy-235631
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-235631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (175.50s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-lv55w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-lv55w -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-lv55w -- sh -c "ping -c 1 192.168.39.1": exit status 1 (161.93789ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-lv55w): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-pmkvp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-pmkvp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-pmkvp -- sh -c "ping -c 1 192.168.39.1": exit status 1 (156.043925ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-pmkvp): exit status 1
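The "ping: permission denied (are you root?)" message in both pods is what busybox ping typically prints when the container can open neither a raw ICMP socket (CAP_NET_RAW is not in the runtime's default capability set; recent CRI-O versions drop NET_RAW by default) nor an unprivileged ICMP datagram socket (net.ipv4.ping_group_range does not cover the pod's GID), so the failure points at pod capabilities/sysctls rather than the network path. A minimal diagnostic sketch, reusing the pod name and profile from this run and assuming the same kubectl wrapper, would be:
	# check whether unprivileged ICMP sockets are allowed inside the pod ("1 0" means disabled)
	out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-lv55w -- sh -c "cat /proc/sys/net/ipv4/ping_group_range"
	# if disabled, ping needs CAP_NET_RAW, e.g. adding to the busybox container spec:
	#   securityContext: { capabilities: { add: ["NET_RAW"] } }
	# before retrying: out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-lv55w -- sh -c "ping -c 1 192.168.39.1"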
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-124911 -n multinode-124911
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-124911 logs -n 25: (1.26265224s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-919267 ssh -- ls                    | mount-start-2-919267 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 21:58 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-919267 ssh --                       | mount-start-2-919267 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 21:58 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-919267                           | mount-start-2-919267 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 21:58 UTC |
	| start   | -p mount-start-2-919267                           | mount-start-2-919267 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 21:58 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-919267 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC |                     |
	|         | --profile mount-start-2-919267                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-919267 ssh -- ls                    | mount-start-2-919267 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 21:58 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-919267 ssh --                       | mount-start-2-919267 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 21:58 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-919267                           | mount-start-2-919267 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 21:58 UTC |
	| delete  | -p mount-start-1-906327                           | mount-start-1-906327 | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 21:58 UTC |
	| start   | -p multinode-124911                               | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 21:58 UTC | 14 Sep 23 22:00 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- apply -f                   | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- rollout                    | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- get pods -o                | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- get pods -o                | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | busybox-5bc68d56bd-lv55w --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | busybox-5bc68d56bd-pmkvp --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | busybox-5bc68d56bd-lv55w --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | busybox-5bc68d56bd-pmkvp --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | busybox-5bc68d56bd-lv55w -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | busybox-5bc68d56bd-pmkvp -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- get pods -o                | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:00 UTC | 14 Sep 23 22:00 UTC |
	|         | busybox-5bc68d56bd-lv55w                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC |                     |
	|         | busybox-5bc68d56bd-lv55w -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-pmkvp                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-124911 -- exec                       | multinode-124911     | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC |                     |
	|         | busybox-5bc68d56bd-pmkvp -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
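	The tail of the command table above is the multinode DNS/ping verification: each busybox pod resolves kubernetes.io, kubernetes.default, and the full service name, then pings the host gateway 192.168.39.1 (the two ping rows have no completion timestamp, i.e. they never succeeded). As a rough sketch only, the same checks could be driven from Go roughly as below; the binary path, flag ordering, and helper name are illustrative assumptions, not the test harness's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// minikubeKubectl mirrors the "kubectl | -p <profile> -- <args>" rows in the
// table above; binary path and exact flag ordering are assumptions.
func minikubeKubectl(profile string, args ...string) (string, error) {
	full := append([]string{"kubectl", "-p", profile, "--"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	pods := []string{"busybox-5bc68d56bd-lv55w", "busybox-5bc68d56bd-pmkvp"}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, h := range hosts {
			out, err := minikubeKubectl("multinode-124911", "exec", pod, "--", "nslookup", h)
			fmt.Printf("%s nslookup %s: err=%v\n%s\n", pod, h, err, out)
		}
		// The ping back to the host gateway is the step that never completes
		// in the table above (no end timestamp on the two ping rows).
		out, err := minikubeKubectl("multinode-124911", "exec", pod, "--", "sh", "-c", "ping -c 1 192.168.39.1")
		fmt.Printf("%s ping 192.168.39.1: err=%v\n%s\n", pod, err, out)
	}
}
```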
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 21:58:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 21:58:37.293632   25747 out.go:296] Setting OutFile to fd 1 ...
	I0914 21:58:37.293857   25747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:58:37.293865   25747 out.go:309] Setting ErrFile to fd 2...
	I0914 21:58:37.293870   25747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:58:37.294029   25747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 21:58:37.294555   25747 out.go:303] Setting JSON to false
	I0914 21:58:37.295397   25747 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2460,"bootTime":1694726258,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 21:58:37.295451   25747 start.go:138] virtualization: kvm guest
	I0914 21:58:37.297493   25747 out.go:177] * [multinode-124911] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 21:58:37.298892   25747 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 21:58:37.300226   25747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 21:58:37.298933   25747 notify.go:220] Checking for updates...
	I0914 21:58:37.302776   25747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:58:37.304028   25747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:58:37.305292   25747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 21:58:37.306704   25747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 21:58:37.308103   25747 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 21:58:37.342529   25747 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 21:58:37.343997   25747 start.go:298] selected driver: kvm2
	I0914 21:58:37.344022   25747 start.go:902] validating driver "kvm2" against <nil>
	I0914 21:58:37.344038   25747 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 21:58:37.344751   25747 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:58:37.344844   25747 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 21:58:37.358365   25747 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 21:58:37.358420   25747 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 21:58:37.358616   25747 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 21:58:37.358659   25747 cni.go:84] Creating CNI manager for ""
	I0914 21:58:37.358675   25747 cni.go:136] 0 nodes found, recommending kindnet
	I0914 21:58:37.358689   25747 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 21:58:37.358700   25747 start_flags.go:321] config:
	{Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:58:37.358841   25747 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:58:37.360622   25747 out.go:177] * Starting control plane node multinode-124911 in cluster multinode-124911
	I0914 21:58:37.361914   25747 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 21:58:37.361972   25747 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0914 21:58:37.361992   25747 cache.go:57] Caching tarball of preloaded images
	I0914 21:58:37.362084   25747 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 21:58:37.362098   25747 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 21:58:37.362421   25747 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 21:58:37.362448   25747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json: {Name:mk6248422d8895d33777c762c7010ce5bab29dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:58:37.362575   25747 start.go:365] acquiring machines lock for multinode-124911: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 21:58:37.362611   25747 start.go:369] acquired machines lock for "multinode-124911" in 21.651µs
	I0914 21:58:37.362632   25747 start.go:93] Provisioning new machine with config: &{Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 21:58:37.362691   25747 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 21:58:37.364305   25747 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 21:58:37.364436   25747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:58:37.364484   25747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:58:37.377416   25747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0914 21:58:37.377771   25747 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:58:37.378252   25747 main.go:141] libmachine: Using API Version  1
	I0914 21:58:37.378277   25747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:58:37.378633   25747 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:58:37.378808   25747 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 21:58:37.378970   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:58:37.379109   25747 start.go:159] libmachine.API.Create for "multinode-124911" (driver="kvm2")
	I0914 21:58:37.379133   25747 client.go:168] LocalClient.Create starting
	I0914 21:58:37.379159   25747 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem
	I0914 21:58:37.379194   25747 main.go:141] libmachine: Decoding PEM data...
	I0914 21:58:37.379210   25747 main.go:141] libmachine: Parsing certificate...
	I0914 21:58:37.379252   25747 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem
	I0914 21:58:37.379277   25747 main.go:141] libmachine: Decoding PEM data...
	I0914 21:58:37.379289   25747 main.go:141] libmachine: Parsing certificate...
	I0914 21:58:37.379316   25747 main.go:141] libmachine: Running pre-create checks...
	I0914 21:58:37.379326   25747 main.go:141] libmachine: (multinode-124911) Calling .PreCreateCheck
	I0914 21:58:37.379678   25747 main.go:141] libmachine: (multinode-124911) Calling .GetConfigRaw
	I0914 21:58:37.380028   25747 main.go:141] libmachine: Creating machine...
	I0914 21:58:37.380041   25747 main.go:141] libmachine: (multinode-124911) Calling .Create
	I0914 21:58:37.380147   25747 main.go:141] libmachine: (multinode-124911) Creating KVM machine...
	I0914 21:58:37.381219   25747 main.go:141] libmachine: (multinode-124911) DBG | found existing default KVM network
	I0914 21:58:37.381865   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:37.381713   25770 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a30}
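	network.go:209 above settles on 192.168.39.0/24 as the first free private subnet for the cluster network. Purely as an illustration (not minikube's actual logic), a free /24 can be picked by checking candidate ranges against the addresses already configured on host interfaces; the candidate list and step size below are assumptions.

```go
package main

import (
	"fmt"
	"net"
)

// candidateFreeSubnet walks a few conventional private /24 ranges and returns
// the first one that does not overlap an address already configured on a host
// interface.
func candidateFreeSubnet() (*net.IPNet, error) {
	ifaceAddrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for third := 39; third <= 254; third += 10 {
		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			return nil, err
		}
		inUse := false
		for _, a := range ifaceAddrs {
			if ipNet, ok := a.(*net.IPNet); ok &&
				(candidate.Contains(ipNet.IP) || ipNet.Contains(candidate.IP)) {
				inUse = true
				break
			}
		}
		if !inUse {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	subnet, err := candidateFreeSubnet()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("using free private subnet", subnet)
}
```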
	I0914 21:58:37.386584   25747 main.go:141] libmachine: (multinode-124911) DBG | trying to create private KVM network mk-multinode-124911 192.168.39.0/24...
	I0914 21:58:37.454567   25747 main.go:141] libmachine: (multinode-124911) Setting up store path in /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911 ...
	I0914 21:58:37.454614   25747 main.go:141] libmachine: (multinode-124911) DBG | private KVM network mk-multinode-124911 192.168.39.0/24 created
	I0914 21:58:37.454633   25747 main.go:141] libmachine: (multinode-124911) Building disk image from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso
	I0914 21:58:37.454671   25747 main.go:141] libmachine: (multinode-124911) Downloading /home/jenkins/minikube-integration/17243-6287/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso...
	I0914 21:58:37.454710   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:37.454441   25770 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:58:37.661208   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:37.661088   25770 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa...
	I0914 21:58:37.726606   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:37.726458   25770 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/multinode-124911.rawdisk...
	I0914 21:58:37.726649   25747 main.go:141] libmachine: (multinode-124911) DBG | Writing magic tar header
	I0914 21:58:37.726689   25747 main.go:141] libmachine: (multinode-124911) DBG | Writing SSH key tar header
	I0914 21:58:37.726715   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:37.726598   25770 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911 ...
	I0914 21:58:37.726753   25747 main.go:141] libmachine: (multinode-124911) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911 (perms=drwx------)
	I0914 21:58:37.726780   25747 main.go:141] libmachine: (multinode-124911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911
	I0914 21:58:37.726789   25747 main.go:141] libmachine: (multinode-124911) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines (perms=drwxr-xr-x)
	I0914 21:58:37.726801   25747 main.go:141] libmachine: (multinode-124911) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube (perms=drwxr-xr-x)
	I0914 21:58:37.726812   25747 main.go:141] libmachine: (multinode-124911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines
	I0914 21:58:37.726827   25747 main.go:141] libmachine: (multinode-124911) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287 (perms=drwxrwxr-x)
	I0914 21:58:37.726844   25747 main.go:141] libmachine: (multinode-124911) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 21:58:37.726859   25747 main.go:141] libmachine: (multinode-124911) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 21:58:37.726872   25747 main.go:141] libmachine: (multinode-124911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:58:37.726884   25747 main.go:141] libmachine: (multinode-124911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287
	I0914 21:58:37.726895   25747 main.go:141] libmachine: (multinode-124911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 21:58:37.726902   25747 main.go:141] libmachine: (multinode-124911) Creating domain...
	I0914 21:58:37.726918   25747 main.go:141] libmachine: (multinode-124911) DBG | Checking permissions on dir: /home/jenkins
	I0914 21:58:37.726929   25747 main.go:141] libmachine: (multinode-124911) DBG | Checking permissions on dir: /home
	I0914 21:58:37.726945   25747 main.go:141] libmachine: (multinode-124911) DBG | Skipping /home - not owner
	I0914 21:58:37.728016   25747 main.go:141] libmachine: (multinode-124911) define libvirt domain using xml: 
	I0914 21:58:37.728034   25747 main.go:141] libmachine: (multinode-124911) <domain type='kvm'>
	I0914 21:58:37.728041   25747 main.go:141] libmachine: (multinode-124911)   <name>multinode-124911</name>
	I0914 21:58:37.728047   25747 main.go:141] libmachine: (multinode-124911)   <memory unit='MiB'>2200</memory>
	I0914 21:58:37.728053   25747 main.go:141] libmachine: (multinode-124911)   <vcpu>2</vcpu>
	I0914 21:58:37.728065   25747 main.go:141] libmachine: (multinode-124911)   <features>
	I0914 21:58:37.728073   25747 main.go:141] libmachine: (multinode-124911)     <acpi/>
	I0914 21:58:37.728082   25747 main.go:141] libmachine: (multinode-124911)     <apic/>
	I0914 21:58:37.728092   25747 main.go:141] libmachine: (multinode-124911)     <pae/>
	I0914 21:58:37.728109   25747 main.go:141] libmachine: (multinode-124911)     
	I0914 21:58:37.728115   25747 main.go:141] libmachine: (multinode-124911)   </features>
	I0914 21:58:37.728121   25747 main.go:141] libmachine: (multinode-124911)   <cpu mode='host-passthrough'>
	I0914 21:58:37.728130   25747 main.go:141] libmachine: (multinode-124911)   
	I0914 21:58:37.728141   25747 main.go:141] libmachine: (multinode-124911)   </cpu>
	I0914 21:58:37.728147   25747 main.go:141] libmachine: (multinode-124911)   <os>
	I0914 21:58:37.728156   25747 main.go:141] libmachine: (multinode-124911)     <type>hvm</type>
	I0914 21:58:37.728166   25747 main.go:141] libmachine: (multinode-124911)     <boot dev='cdrom'/>
	I0914 21:58:37.728175   25747 main.go:141] libmachine: (multinode-124911)     <boot dev='hd'/>
	I0914 21:58:37.728189   25747 main.go:141] libmachine: (multinode-124911)     <bootmenu enable='no'/>
	I0914 21:58:37.728198   25747 main.go:141] libmachine: (multinode-124911)   </os>
	I0914 21:58:37.728212   25747 main.go:141] libmachine: (multinode-124911)   <devices>
	I0914 21:58:37.728226   25747 main.go:141] libmachine: (multinode-124911)     <disk type='file' device='cdrom'>
	I0914 21:58:37.728255   25747 main.go:141] libmachine: (multinode-124911)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/boot2docker.iso'/>
	I0914 21:58:37.728271   25747 main.go:141] libmachine: (multinode-124911)       <target dev='hdc' bus='scsi'/>
	I0914 21:58:37.728281   25747 main.go:141] libmachine: (multinode-124911)       <readonly/>
	I0914 21:58:37.728293   25747 main.go:141] libmachine: (multinode-124911)     </disk>
	I0914 21:58:37.728309   25747 main.go:141] libmachine: (multinode-124911)     <disk type='file' device='disk'>
	I0914 21:58:37.728326   25747 main.go:141] libmachine: (multinode-124911)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 21:58:37.728341   25747 main.go:141] libmachine: (multinode-124911)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/multinode-124911.rawdisk'/>
	I0914 21:58:37.728351   25747 main.go:141] libmachine: (multinode-124911)       <target dev='hda' bus='virtio'/>
	I0914 21:58:37.728356   25747 main.go:141] libmachine: (multinode-124911)     </disk>
	I0914 21:58:37.728365   25747 main.go:141] libmachine: (multinode-124911)     <interface type='network'>
	I0914 21:58:37.728374   25747 main.go:141] libmachine: (multinode-124911)       <source network='mk-multinode-124911'/>
	I0914 21:58:37.728387   25747 main.go:141] libmachine: (multinode-124911)       <model type='virtio'/>
	I0914 21:58:37.728402   25747 main.go:141] libmachine: (multinode-124911)     </interface>
	I0914 21:58:37.728412   25747 main.go:141] libmachine: (multinode-124911)     <interface type='network'>
	I0914 21:58:37.728426   25747 main.go:141] libmachine: (multinode-124911)       <source network='default'/>
	I0914 21:58:37.728439   25747 main.go:141] libmachine: (multinode-124911)       <model type='virtio'/>
	I0914 21:58:37.728451   25747 main.go:141] libmachine: (multinode-124911)     </interface>
	I0914 21:58:37.728465   25747 main.go:141] libmachine: (multinode-124911)     <serial type='pty'>
	I0914 21:58:37.728474   25747 main.go:141] libmachine: (multinode-124911)       <target port='0'/>
	I0914 21:58:37.728481   25747 main.go:141] libmachine: (multinode-124911)     </serial>
	I0914 21:58:37.728495   25747 main.go:141] libmachine: (multinode-124911)     <console type='pty'>
	I0914 21:58:37.728509   25747 main.go:141] libmachine: (multinode-124911)       <target type='serial' port='0'/>
	I0914 21:58:37.728520   25747 main.go:141] libmachine: (multinode-124911)     </console>
	I0914 21:58:37.728532   25747 main.go:141] libmachine: (multinode-124911)     <rng model='virtio'>
	I0914 21:58:37.728547   25747 main.go:141] libmachine: (multinode-124911)       <backend model='random'>/dev/random</backend>
	I0914 21:58:37.728559   25747 main.go:141] libmachine: (multinode-124911)     </rng>
	I0914 21:58:37.728588   25747 main.go:141] libmachine: (multinode-124911)     
	I0914 21:58:37.728616   25747 main.go:141] libmachine: (multinode-124911)     
	I0914 21:58:37.728631   25747 main.go:141] libmachine: (multinode-124911)   </devices>
	I0914 21:58:37.728643   25747 main.go:141] libmachine: (multinode-124911) </domain>
	I0914 21:58:37.728659   25747 main.go:141] libmachine: (multinode-124911) 
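	Stripped of the log prefixes, the lines above spell out the full libvirt domain definition the driver submits: the multinode-124911 name, 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs (the private mk-multinode-124911 network plus libvirt's default network). A minimal stdlib sketch of rendering such a definition from a Go template follows; the field names and the trimmed XML are illustrative, not minikube's actual template.

```go
package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed-down version of the definition shown in the log.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='{{.PrivateNet}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

type domainParams struct {
	Name, ISOPath, DiskPath, PrivateNet string
	MemoryMiB, CPUs                     int
}

func main() {
	p := domainParams{
		Name:       "multinode-124911",
		MemoryMiB:  2200,
		CPUs:       2,
		ISOPath:    "/path/to/boot2docker.iso",          // placeholder
		DiskPath:   "/path/to/multinode-124911.rawdisk", // placeholder
		PrivateNet: "mk-multinode-124911",
	}
	// Render to stdout; the real driver hands the result to libvirt to define the domain.
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```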
	I0914 21:58:37.732722   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:22:0f:b1 in network default
	I0914 21:58:37.733343   25747 main.go:141] libmachine: (multinode-124911) Ensuring networks are active...
	I0914 21:58:37.733369   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:37.734047   25747 main.go:141] libmachine: (multinode-124911) Ensuring network default is active
	I0914 21:58:37.734346   25747 main.go:141] libmachine: (multinode-124911) Ensuring network mk-multinode-124911 is active
	I0914 21:58:37.734871   25747 main.go:141] libmachine: (multinode-124911) Getting domain xml...
	I0914 21:58:37.735524   25747 main.go:141] libmachine: (multinode-124911) Creating domain...
	I0914 21:58:38.948752   25747 main.go:141] libmachine: (multinode-124911) Waiting to get IP...
	I0914 21:58:38.949422   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:38.949780   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:38.949812   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:38.949763   25770 retry.go:31] will retry after 295.366173ms: waiting for machine to come up
	I0914 21:58:39.246410   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:39.246870   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:39.246909   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:39.246840   25770 retry.go:31] will retry after 245.31875ms: waiting for machine to come up
	I0914 21:58:39.493208   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:39.493557   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:39.493585   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:39.493522   25770 retry.go:31] will retry after 350.509025ms: waiting for machine to come up
	I0914 21:58:39.846218   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:39.846566   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:39.846597   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:39.846508   25770 retry.go:31] will retry after 512.995593ms: waiting for machine to come up
	I0914 21:58:40.361132   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:40.361605   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:40.361637   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:40.361522   25770 retry.go:31] will retry after 635.303637ms: waiting for machine to come up
	I0914 21:58:40.998588   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:40.999045   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:40.999079   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:40.998997   25770 retry.go:31] will retry after 796.733122ms: waiting for machine to come up
	I0914 21:58:41.796860   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:41.797340   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:41.797416   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:41.797319   25770 retry.go:31] will retry after 786.856359ms: waiting for machine to come up
	I0914 21:58:42.585621   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:42.586112   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:42.586139   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:42.586063   25770 retry.go:31] will retry after 1.437324685s: waiting for machine to come up
	I0914 21:58:44.025625   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:44.026052   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:44.026085   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:44.025996   25770 retry.go:31] will retry after 1.49014662s: waiting for machine to come up
	I0914 21:58:45.518580   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:45.518931   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:45.518960   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:45.518892   25770 retry.go:31] will retry after 2.08228381s: waiting for machine to come up
	I0914 21:58:47.602909   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:47.603327   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:47.603357   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:47.603285   25770 retry.go:31] will retry after 2.350153157s: waiting for machine to come up
	I0914 21:58:49.956713   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:49.957132   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:49.957165   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:49.957083   25770 retry.go:31] will retry after 2.967849335s: waiting for machine to come up
	I0914 21:58:52.926747   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:52.927144   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:52.927167   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:52.927118   25770 retry.go:31] will retry after 3.356363636s: waiting for machine to come up
	I0914 21:58:56.287780   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:58:56.288240   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 21:58:56.288267   25747 main.go:141] libmachine: (multinode-124911) DBG | I0914 21:58:56.288154   25770 retry.go:31] will retry after 4.706329835s: waiting for machine to come up
	I0914 21:59:00.998664   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:00.999055   25747 main.go:141] libmachine: (multinode-124911) Found IP for machine: 192.168.39.116
	I0914 21:59:00.999078   25747 main.go:141] libmachine: (multinode-124911) Reserving static IP address...
	I0914 21:59:00.999104   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has current primary IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:00.999428   25747 main.go:141] libmachine: (multinode-124911) DBG | unable to find host DHCP lease matching {name: "multinode-124911", mac: "52:54:00:97:3f:c1", ip: "192.168.39.116"} in network mk-multinode-124911
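	The repeated "will retry after …: waiting for machine to come up" lines show the driver polling the DHCP leases with a growing delay (295 ms up to roughly 4.7 s here) until the new domain reports an address. A minimal, self-contained sketch of that retry pattern follows; lookupIP is a stand-in for the real lease lookup, and the exact backoff schedule is an assumption.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for the real DHCP-lease lookup; it fails until the
// guest has obtained an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with an increasing, jittered delay, roughly matching the
// 295ms -> ~4.7s progression recorded in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay += delay / 2 // grow ~1.5x per attempt
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(30 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```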
	I0914 21:59:01.067931   25747 main.go:141] libmachine: (multinode-124911) DBG | Getting to WaitForSSH function...
	I0914 21:59:01.067964   25747 main.go:141] libmachine: (multinode-124911) Reserved static IP address: 192.168.39.116
	I0914 21:59:01.068017   25747 main.go:141] libmachine: (multinode-124911) Waiting for SSH to be available...
	I0914 21:59:01.070508   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.070867   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.070902   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.071050   25747 main.go:141] libmachine: (multinode-124911) DBG | Using SSH client type: external
	I0914 21:59:01.071079   25747 main.go:141] libmachine: (multinode-124911) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa (-rw-------)
	I0914 21:59:01.071130   25747 main.go:141] libmachine: (multinode-124911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 21:59:01.071154   25747 main.go:141] libmachine: (multinode-124911) DBG | About to run SSH command:
	I0914 21:59:01.071170   25747 main.go:141] libmachine: (multinode-124911) DBG | exit 0
	I0914 21:59:01.163495   25747 main.go:141] libmachine: (multinode-124911) DBG | SSH cmd err, output: <nil>: 
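	The "Using SSH client type: external" block above records the exact ssh(1) invocation used to probe the guest ("exit 0"). Reproducing that probe from Go is a plain exec.Command over a subset of the same flags; the key path and address below are copied from this log and would differ on another run.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.116",
		"exit 0", // the availability probe run in the log
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("ssh probe: err=%v output=%q\n", err, out)
}
```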
	I0914 21:59:01.163761   25747 main.go:141] libmachine: (multinode-124911) KVM machine creation complete!
	I0914 21:59:01.164037   25747 main.go:141] libmachine: (multinode-124911) Calling .GetConfigRaw
	I0914 21:59:01.164507   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:01.164690   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:01.164847   25747 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 21:59:01.164861   25747 main.go:141] libmachine: (multinode-124911) Calling .GetState
	I0914 21:59:01.166003   25747 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 21:59:01.166016   25747 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 21:59:01.166022   25747 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 21:59:01.166029   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:01.168202   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.168555   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.168589   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.168703   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:01.168865   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.169029   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.169162   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:01.169350   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 21:59:01.169672   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 21:59:01.169684   25747 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 21:59:01.282177   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 21:59:01.282203   25747 main.go:141] libmachine: Detecting the provisioner...
	I0914 21:59:01.282222   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:01.284852   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.285210   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.285245   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.285387   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:01.285582   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.285734   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.285919   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:01.286129   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 21:59:01.286436   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 21:59:01.286447   25747 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 21:59:01.399669   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g52d8811-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0914 21:59:01.399790   25747 main.go:141] libmachine: found compatible host: buildroot
	I0914 21:59:01.399810   25747 main.go:141] libmachine: Provisioning with buildroot...
	I0914 21:59:01.399823   25747 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 21:59:01.400083   25747 buildroot.go:166] provisioning hostname "multinode-124911"
	I0914 21:59:01.400106   25747 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 21:59:01.400294   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:01.402823   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.403201   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.403224   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.403351   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:01.403545   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.403709   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.403891   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:01.404039   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 21:59:01.404330   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 21:59:01.404341   25747 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-124911 && echo "multinode-124911" | sudo tee /etc/hostname
	I0914 21:59:01.530655   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-124911
	
	I0914 21:59:01.530688   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:01.533253   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.533673   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.533703   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.533872   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:01.534052   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.534186   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.534363   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:01.534528   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 21:59:01.535000   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 21:59:01.535030   25747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124911/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 21:59:01.658270   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 21:59:01.658302   25747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 21:59:01.658356   25747 buildroot.go:174] setting up certificates
	I0914 21:59:01.658368   25747 provision.go:83] configureAuth start
	I0914 21:59:01.658382   25747 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 21:59:01.658719   25747 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 21:59:01.661717   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.662098   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.662156   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.662232   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:01.664389   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.664682   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.664711   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.664851   25747 provision.go:138] copyHostCerts
	I0914 21:59:01.664879   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 21:59:01.664915   25747 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 21:59:01.664925   25747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 21:59:01.664975   25747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 21:59:01.665056   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 21:59:01.665079   25747 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 21:59:01.665086   25747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 21:59:01.665106   25747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 21:59:01.665162   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 21:59:01.665177   25747 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 21:59:01.665183   25747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 21:59:01.665199   25747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 21:59:01.665261   25747 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.multinode-124911 san=[192.168.39.116 192.168.39.116 localhost 127.0.0.1 minikube multinode-124911]
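	configureAuth (provision.go above) generates a server certificate whose SANs cover the VM IP, localhost, and the machine hostnames before copying everything under /etc/docker on the guest. Below is a compact stdlib sketch of issuing such a certificate; the throwaway CA, key sizes, and validity window are assumptions for the example only, not minikube's code.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate against the given CA with the same
// kind of SAN list seen in the log above (IP, localhost, hostnames).
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-124911"}},
		DNSNames:     []string{"localhost", "minikube", "multinode-124911"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.116"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA so the example runs end to end; the real code
	// loads ca.pem / ca-key.pem from the .minikube certs directory.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	der, _, err := newServerCert(caCert, caKey)
	fmt.Printf("server cert issued: %d bytes, err=%v\n", len(der), err)
}
```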
	I0914 21:59:01.789512   25747 provision.go:172] copyRemoteCerts
	I0914 21:59:01.789568   25747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 21:59:01.789603   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:01.792334   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.792651   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.792679   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.792868   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:01.793061   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.793228   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:01.793343   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 21:59:01.880390   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 21:59:01.880458   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 21:59:01.900350   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 21:59:01.900416   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 21:59:01.920439   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 21:59:01.920495   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 21:59:01.940713   25747 provision.go:86] duration metric: configureAuth took 282.333999ms
	I0914 21:59:01.940735   25747 buildroot.go:189] setting minikube options for container-runtime
	I0914 21:59:01.940879   25747 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 21:59:01.940939   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:01.943352   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.943690   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:01.943725   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:01.943893   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:01.944094   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.944255   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:01.944383   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:01.944526   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 21:59:01.944833   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 21:59:01.944851   25747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 21:59:02.232054   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 21:59:02.232078   25747 main.go:141] libmachine: Checking connection to Docker...
	I0914 21:59:02.232087   25747 main.go:141] libmachine: (multinode-124911) Calling .GetURL
	I0914 21:59:02.233262   25747 main.go:141] libmachine: (multinode-124911) DBG | Using libvirt version 6000000
	I0914 21:59:02.235666   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.235996   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:02.236029   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.236212   25747 main.go:141] libmachine: Docker is up and running!
	I0914 21:59:02.236229   25747 main.go:141] libmachine: Reticulating splines...
	I0914 21:59:02.236238   25747 client.go:171] LocalClient.Create took 24.857095897s
	I0914 21:59:02.236262   25747 start.go:167] duration metric: libmachine.API.Create for "multinode-124911" took 24.857155449s
	I0914 21:59:02.236276   25747 start.go:300] post-start starting for "multinode-124911" (driver="kvm2")
	I0914 21:59:02.236286   25747 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 21:59:02.236299   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:02.236592   25747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 21:59:02.236618   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:02.238797   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.239078   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:02.239098   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.239208   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:02.239398   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:02.239576   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:02.239738   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 21:59:02.323484   25747 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 21:59:02.327005   25747 command_runner.go:130] > NAME=Buildroot
	I0914 21:59:02.327029   25747 command_runner.go:130] > VERSION=2021.02.12-1-g52d8811-dirty
	I0914 21:59:02.327036   25747 command_runner.go:130] > ID=buildroot
	I0914 21:59:02.327044   25747 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 21:59:02.327052   25747 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 21:59:02.327097   25747 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 21:59:02.327119   25747 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 21:59:02.327177   25747 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 21:59:02.327276   25747 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 21:59:02.327289   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /etc/ssl/certs/134852.pem
	I0914 21:59:02.327368   25747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 21:59:02.334723   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 21:59:02.355211   25747 start.go:303] post-start completed in 118.921529ms
	I0914 21:59:02.355272   25747 main.go:141] libmachine: (multinode-124911) Calling .GetConfigRaw
	I0914 21:59:02.355834   25747 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 21:59:02.359573   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.359997   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:02.360036   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.360257   25747 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 21:59:02.360474   25747 start.go:128] duration metric: createHost completed in 24.997773316s
	I0914 21:59:02.360497   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:02.362839   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.363135   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:02.363162   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.363657   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:02.363829   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:02.363974   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:02.364080   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:02.364215   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 21:59:02.364518   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 21:59:02.364529   25747 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 21:59:02.479733   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694728742.456288650
	
	I0914 21:59:02.479754   25747 fix.go:206] guest clock: 1694728742.456288650
	I0914 21:59:02.479761   25747 fix.go:219] Guest: 2023-09-14 21:59:02.45628865 +0000 UTC Remote: 2023-09-14 21:59:02.360486169 +0000 UTC m=+25.096449900 (delta=95.802481ms)
	I0914 21:59:02.479778   25747 fix.go:190] guest clock delta is within tolerance: 95.802481ms
	I0914 21:59:02.479783   25747 start.go:83] releasing machines lock for "multinode-124911", held for 25.117164365s
	I0914 21:59:02.479803   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:02.480105   25747 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 21:59:02.482631   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.482979   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:02.483008   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.483210   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:02.483654   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:02.483810   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:02.483919   25747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 21:59:02.483960   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:02.483974   25747 ssh_runner.go:195] Run: cat /version.json
	I0914 21:59:02.483992   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:02.486820   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.487023   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.487173   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:02.487205   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.487312   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:02.487319   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:02.487357   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:02.487521   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:02.487538   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:02.487715   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:02.487730   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:02.487876   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:02.487887   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 21:59:02.488003   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 21:59:02.596612   25747 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 21:59:02.596676   25747 command_runner.go:130] > {"iso_version": "v1.31.0-1694625400-17243", "kicbase_version": "v0.0.40-1694457807-17194", "minikube_version": "v1.31.2", "commit": "b8afb9b4a853f4e7882dbdfb53995784a48fcea7"}
	I0914 21:59:02.596813   25747 ssh_runner.go:195] Run: systemctl --version
	I0914 21:59:02.601947   25747 command_runner.go:130] > systemd 247 (247)
	I0914 21:59:02.601975   25747 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0914 21:59:02.602030   25747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 21:59:02.755956   25747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 21:59:02.761590   25747 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 21:59:02.761644   25747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 21:59:02.761695   25747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 21:59:02.780041   25747 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 21:59:02.780097   25747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 21:59:02.780106   25747 start.go:469] detecting cgroup driver to use...
	I0914 21:59:02.780160   25747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 21:59:02.793102   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 21:59:02.804744   25747 docker.go:196] disabling cri-docker service (if available) ...
	I0914 21:59:02.804790   25747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 21:59:02.816647   25747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 21:59:02.828437   25747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 21:59:02.928523   25747 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0914 21:59:02.928606   25747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 21:59:03.037688   25747 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0914 21:59:03.037721   25747 docker.go:212] disabling docker service ...
	I0914 21:59:03.037763   25747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 21:59:03.050442   25747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 21:59:03.061486   25747 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0914 21:59:03.061572   25747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 21:59:03.073647   25747 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0914 21:59:03.162029   25747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 21:59:03.269408   25747 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0914 21:59:03.269434   25747 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0914 21:59:03.269485   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 21:59:03.280762   25747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 21:59:03.295652   25747 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 21:59:03.296108   25747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 21:59:03.296169   25747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:59:03.304199   25747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 21:59:03.304263   25747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:59:03.312260   25747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:59:03.320445   25747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 21:59:03.328558   25747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 21:59:03.336830   25747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 21:59:03.344497   25747 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 21:59:03.344615   25747 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 21:59:03.344662   25747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 21:59:03.355156   25747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 21:59:03.362483   25747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 21:59:03.461119   25747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 21:59:03.616408   25747 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 21:59:03.616495   25747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 21:59:03.621191   25747 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 21:59:03.621216   25747 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 21:59:03.621225   25747 command_runner.go:130] > Device: 16h/22d	Inode: 720         Links: 1
	I0914 21:59:03.621235   25747 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 21:59:03.621247   25747 command_runner.go:130] > Access: 2023-09-14 21:59:03.578628525 +0000
	I0914 21:59:03.621257   25747 command_runner.go:130] > Modify: 2023-09-14 21:59:03.578628525 +0000
	I0914 21:59:03.621270   25747 command_runner.go:130] > Change: 2023-09-14 21:59:03.578628525 +0000
	I0914 21:59:03.621278   25747 command_runner.go:130] >  Birth: -
	I0914 21:59:03.621303   25747 start.go:537] Will wait 60s for crictl version
	I0914 21:59:03.621348   25747 ssh_runner.go:195] Run: which crictl
	I0914 21:59:03.624928   25747 command_runner.go:130] > /usr/bin/crictl
	I0914 21:59:03.625076   25747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 21:59:03.651551   25747 command_runner.go:130] > Version:  0.1.0
	I0914 21:59:03.651572   25747 command_runner.go:130] > RuntimeName:  cri-o
	I0914 21:59:03.651580   25747 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0914 21:59:03.651588   25747 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0914 21:59:03.651608   25747 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 21:59:03.651672   25747 ssh_runner.go:195] Run: crio --version
	I0914 21:59:03.703112   25747 command_runner.go:130] > crio version 1.24.1
	I0914 21:59:03.703139   25747 command_runner.go:130] > Version:          1.24.1
	I0914 21:59:03.703149   25747 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 21:59:03.703155   25747 command_runner.go:130] > GitTreeState:     dirty
	I0914 21:59:03.703170   25747 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 21:59:03.703178   25747 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 21:59:03.703185   25747 command_runner.go:130] > Compiler:         gc
	I0914 21:59:03.703192   25747 command_runner.go:130] > Platform:         linux/amd64
	I0914 21:59:03.703201   25747 command_runner.go:130] > Linkmode:         dynamic
	I0914 21:59:03.703212   25747 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 21:59:03.703221   25747 command_runner.go:130] > SeccompEnabled:   true
	I0914 21:59:03.703227   25747 command_runner.go:130] > AppArmorEnabled:  false
	I0914 21:59:03.704538   25747 ssh_runner.go:195] Run: crio --version
	I0914 21:59:03.742931   25747 command_runner.go:130] > crio version 1.24.1
	I0914 21:59:03.742955   25747 command_runner.go:130] > Version:          1.24.1
	I0914 21:59:03.742963   25747 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 21:59:03.742967   25747 command_runner.go:130] > GitTreeState:     dirty
	I0914 21:59:03.742973   25747 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 21:59:03.742977   25747 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 21:59:03.742981   25747 command_runner.go:130] > Compiler:         gc
	I0914 21:59:03.742986   25747 command_runner.go:130] > Platform:         linux/amd64
	I0914 21:59:03.742991   25747 command_runner.go:130] > Linkmode:         dynamic
	I0914 21:59:03.742998   25747 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 21:59:03.743002   25747 command_runner.go:130] > SeccompEnabled:   true
	I0914 21:59:03.743006   25747 command_runner.go:130] > AppArmorEnabled:  false
	I0914 21:59:03.746706   25747 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 21:59:03.747956   25747 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 21:59:03.750466   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:03.750885   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:03.750918   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:03.751123   25747 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 21:59:03.754836   25747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 21:59:03.766335   25747 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 21:59:03.766393   25747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 21:59:03.789115   25747 command_runner.go:130] > {
	I0914 21:59:03.789136   25747 command_runner.go:130] >   "images": [
	I0914 21:59:03.789142   25747 command_runner.go:130] >   ]
	I0914 21:59:03.789147   25747 command_runner.go:130] > }
	I0914 21:59:03.790269   25747 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 21:59:03.790318   25747 ssh_runner.go:195] Run: which lz4
	I0914 21:59:03.793526   25747 command_runner.go:130] > /usr/bin/lz4
	I0914 21:59:03.793882   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0914 21:59:03.793965   25747 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 21:59:03.797602   25747 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 21:59:03.797812   25747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 21:59:03.797836   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 21:59:05.392624   25747 crio.go:444] Took 1.598687 seconds to copy over tarball
	I0914 21:59:05.392682   25747 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 21:59:07.888705   25747 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.495995885s)
	I0914 21:59:07.888736   25747 crio.go:451] Took 2.496087 seconds to extract the tarball
	I0914 21:59:07.888746   25747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 21:59:07.928326   25747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 21:59:07.985297   25747 command_runner.go:130] > {
	I0914 21:59:07.985322   25747 command_runner.go:130] >   "images": [
	I0914 21:59:07.985327   25747 command_runner.go:130] >     {
	I0914 21:59:07.985335   25747 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0914 21:59:07.985340   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.985346   25747 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0914 21:59:07.985350   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985354   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.985392   25747 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0914 21:59:07.985412   25747 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0914 21:59:07.985418   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985424   25747 command_runner.go:130] >       "size": "65249302",
	I0914 21:59:07.985432   25747 command_runner.go:130] >       "uid": null,
	I0914 21:59:07.985436   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.985444   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.985450   25747 command_runner.go:130] >     },
	I0914 21:59:07.985454   25747 command_runner.go:130] >     {
	I0914 21:59:07.985463   25747 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 21:59:07.985470   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.985483   25747 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 21:59:07.985489   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985498   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.985511   25747 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 21:59:07.985527   25747 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 21:59:07.985537   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985544   25747 command_runner.go:130] >       "size": "31470524",
	I0914 21:59:07.985552   25747 command_runner.go:130] >       "uid": null,
	I0914 21:59:07.985558   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.985564   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.985569   25747 command_runner.go:130] >     },
	I0914 21:59:07.985577   25747 command_runner.go:130] >     {
	I0914 21:59:07.985588   25747 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0914 21:59:07.985599   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.985610   25747 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0914 21:59:07.985622   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985632   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.985645   25747 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0914 21:59:07.985660   25747 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0914 21:59:07.985669   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985673   25747 command_runner.go:130] >       "size": "53621675",
	I0914 21:59:07.985682   25747 command_runner.go:130] >       "uid": null,
	I0914 21:59:07.985690   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.985701   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.985707   25747 command_runner.go:130] >     },
	I0914 21:59:07.985717   25747 command_runner.go:130] >     {
	I0914 21:59:07.985739   25747 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0914 21:59:07.985753   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.985762   25747 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0914 21:59:07.985769   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985774   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.985788   25747 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0914 21:59:07.985804   25747 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0914 21:59:07.985814   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985822   25747 command_runner.go:130] >       "size": "295456551",
	I0914 21:59:07.985832   25747 command_runner.go:130] >       "uid": {
	I0914 21:59:07.985840   25747 command_runner.go:130] >         "value": "0"
	I0914 21:59:07.985858   25747 command_runner.go:130] >       },
	I0914 21:59:07.985867   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.985872   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.985882   25747 command_runner.go:130] >     },
	I0914 21:59:07.985888   25747 command_runner.go:130] >     {
	I0914 21:59:07.985901   25747 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0914 21:59:07.985908   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.985922   25747 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0914 21:59:07.985931   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985939   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.985954   25747 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0914 21:59:07.985969   25747 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0914 21:59:07.985977   25747 command_runner.go:130] >       ],
	I0914 21:59:07.985983   25747 command_runner.go:130] >       "size": "126972880",
	I0914 21:59:07.985992   25747 command_runner.go:130] >       "uid": {
	I0914 21:59:07.986003   25747 command_runner.go:130] >         "value": "0"
	I0914 21:59:07.986010   25747 command_runner.go:130] >       },
	I0914 21:59:07.986020   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.986031   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.986038   25747 command_runner.go:130] >     },
	I0914 21:59:07.986047   25747 command_runner.go:130] >     {
	I0914 21:59:07.986058   25747 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0914 21:59:07.986071   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.986083   25747 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0914 21:59:07.986093   25747 command_runner.go:130] >       ],
	I0914 21:59:07.986104   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.986118   25747 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0914 21:59:07.986135   25747 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0914 21:59:07.986145   25747 command_runner.go:130] >       ],
	I0914 21:59:07.986156   25747 command_runner.go:130] >       "size": "123163446",
	I0914 21:59:07.986163   25747 command_runner.go:130] >       "uid": {
	I0914 21:59:07.986173   25747 command_runner.go:130] >         "value": "0"
	I0914 21:59:07.986179   25747 command_runner.go:130] >       },
	I0914 21:59:07.986186   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.986197   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.986204   25747 command_runner.go:130] >     },
	I0914 21:59:07.986213   25747 command_runner.go:130] >     {
	I0914 21:59:07.986223   25747 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0914 21:59:07.986233   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.986242   25747 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0914 21:59:07.986251   25747 command_runner.go:130] >       ],
	I0914 21:59:07.986258   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.986271   25747 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0914 21:59:07.986287   25747 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0914 21:59:07.986297   25747 command_runner.go:130] >       ],
	I0914 21:59:07.986304   25747 command_runner.go:130] >       "size": "74680215",
	I0914 21:59:07.986314   25747 command_runner.go:130] >       "uid": null,
	I0914 21:59:07.986322   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.986332   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.986341   25747 command_runner.go:130] >     },
	I0914 21:59:07.986348   25747 command_runner.go:130] >     {
	I0914 21:59:07.986355   25747 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0914 21:59:07.986364   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.986373   25747 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0914 21:59:07.986382   25747 command_runner.go:130] >       ],
	I0914 21:59:07.986388   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.986398   25747 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0914 21:59:07.986456   25747 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0914 21:59:07.986468   25747 command_runner.go:130] >       ],
	I0914 21:59:07.986475   25747 command_runner.go:130] >       "size": "61477686",
	I0914 21:59:07.986483   25747 command_runner.go:130] >       "uid": {
	I0914 21:59:07.986493   25747 command_runner.go:130] >         "value": "0"
	I0914 21:59:07.986500   25747 command_runner.go:130] >       },
	I0914 21:59:07.986509   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.986516   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.986525   25747 command_runner.go:130] >     },
	I0914 21:59:07.986531   25747 command_runner.go:130] >     {
	I0914 21:59:07.986544   25747 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0914 21:59:07.986551   25747 command_runner.go:130] >       "repoTags": [
	I0914 21:59:07.986563   25747 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0914 21:59:07.986573   25747 command_runner.go:130] >       ],
	I0914 21:59:07.986580   25747 command_runner.go:130] >       "repoDigests": [
	I0914 21:59:07.986594   25747 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0914 21:59:07.986609   25747 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0914 21:59:07.986619   25747 command_runner.go:130] >       ],
	I0914 21:59:07.986626   25747 command_runner.go:130] >       "size": "750414",
	I0914 21:59:07.986637   25747 command_runner.go:130] >       "uid": {
	I0914 21:59:07.986644   25747 command_runner.go:130] >         "value": "65535"
	I0914 21:59:07.986653   25747 command_runner.go:130] >       },
	I0914 21:59:07.986661   25747 command_runner.go:130] >       "username": "",
	I0914 21:59:07.986671   25747 command_runner.go:130] >       "spec": null
	I0914 21:59:07.986678   25747 command_runner.go:130] >     }
	I0914 21:59:07.986682   25747 command_runner.go:130] >   ]
	I0914 21:59:07.986686   25747 command_runner.go:130] > }
	I0914 21:59:07.986806   25747 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 21:59:07.986817   25747 cache_images.go:84] Images are preloaded, skipping loading
	I0914 21:59:07.986870   25747 ssh_runner.go:195] Run: crio config
	I0914 21:59:08.032786   25747 command_runner.go:130] ! time="2023-09-14 21:59:08.015536045Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0914 21:59:08.032815   25747 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0914 21:59:08.043283   25747 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 21:59:08.043308   25747 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 21:59:08.043320   25747 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 21:59:08.043326   25747 command_runner.go:130] > #
	I0914 21:59:08.043338   25747 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 21:59:08.043344   25747 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 21:59:08.043355   25747 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 21:59:08.043365   25747 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 21:59:08.043373   25747 command_runner.go:130] > # reload'.
	I0914 21:59:08.043383   25747 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 21:59:08.043394   25747 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 21:59:08.043411   25747 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 21:59:08.043421   25747 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 21:59:08.043427   25747 command_runner.go:130] > [crio]
	I0914 21:59:08.043441   25747 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 21:59:08.043450   25747 command_runner.go:130] > # containers images, in this directory.
	I0914 21:59:08.043457   25747 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 21:59:08.043481   25747 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 21:59:08.043491   25747 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 21:59:08.043501   25747 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 21:59:08.043516   25747 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 21:59:08.043525   25747 command_runner.go:130] > storage_driver = "overlay"
	I0914 21:59:08.043535   25747 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 21:59:08.043549   25747 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 21:59:08.043555   25747 command_runner.go:130] > storage_option = [
	I0914 21:59:08.043560   25747 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 21:59:08.043567   25747 command_runner.go:130] > ]
	I0914 21:59:08.043575   25747 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 21:59:08.043585   25747 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 21:59:08.043595   25747 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 21:59:08.043604   25747 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 21:59:08.043618   25747 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 21:59:08.043628   25747 command_runner.go:130] > # always happen on a node reboot
	I0914 21:59:08.043637   25747 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 21:59:08.043649   25747 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 21:59:08.043658   25747 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 21:59:08.043670   25747 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 21:59:08.043683   25747 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0914 21:59:08.043698   25747 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 21:59:08.043714   25747 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 21:59:08.043725   25747 command_runner.go:130] > # internal_wipe = true
	I0914 21:59:08.043736   25747 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 21:59:08.043757   25747 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 21:59:08.043772   25747 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 21:59:08.043785   25747 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 21:59:08.043795   25747 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 21:59:08.043805   25747 command_runner.go:130] > [crio.api]
	I0914 21:59:08.043814   25747 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 21:59:08.043823   25747 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 21:59:08.043829   25747 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 21:59:08.043840   25747 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 21:59:08.043856   25747 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 21:59:08.043868   25747 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 21:59:08.043878   25747 command_runner.go:130] > # stream_port = "0"
	I0914 21:59:08.043887   25747 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 21:59:08.043897   25747 command_runner.go:130] > # stream_enable_tls = false
	I0914 21:59:08.043907   25747 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 21:59:08.043914   25747 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 21:59:08.043924   25747 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 21:59:08.043938   25747 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 21:59:08.043948   25747 command_runner.go:130] > # minutes.
	I0914 21:59:08.043955   25747 command_runner.go:130] > # stream_tls_cert = ""
	I0914 21:59:08.043968   25747 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 21:59:08.043981   25747 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 21:59:08.043991   25747 command_runner.go:130] > # stream_tls_key = ""
	I0914 21:59:08.043998   25747 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 21:59:08.044010   25747 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 21:59:08.044023   25747 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 21:59:08.044033   25747 command_runner.go:130] > # stream_tls_ca = ""
	I0914 21:59:08.044049   25747 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 21:59:08.044059   25747 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 21:59:08.044074   25747 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 21:59:08.044083   25747 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0914 21:59:08.044116   25747 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 21:59:08.044131   25747 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 21:59:08.044137   25747 command_runner.go:130] > [crio.runtime]
	I0914 21:59:08.044148   25747 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 21:59:08.044160   25747 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 21:59:08.044168   25747 command_runner.go:130] > # "nofile=1024:2048"
	I0914 21:59:08.044174   25747 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 21:59:08.044184   25747 command_runner.go:130] > # default_ulimits = [
	I0914 21:59:08.044190   25747 command_runner.go:130] > # ]
	I0914 21:59:08.044202   25747 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 21:59:08.044213   25747 command_runner.go:130] > # no_pivot = false
	I0914 21:59:08.044222   25747 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 21:59:08.044235   25747 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 21:59:08.044247   25747 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 21:59:08.044256   25747 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 21:59:08.044267   25747 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 21:59:08.044282   25747 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 21:59:08.044293   25747 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 21:59:08.044301   25747 command_runner.go:130] > # Cgroup setting for conmon
	I0914 21:59:08.044315   25747 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 21:59:08.044325   25747 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 21:59:08.044335   25747 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 21:59:08.044344   25747 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 21:59:08.044353   25747 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 21:59:08.044364   25747 command_runner.go:130] > conmon_env = [
	I0914 21:59:08.044374   25747 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 21:59:08.044384   25747 command_runner.go:130] > ]
	I0914 21:59:08.044393   25747 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 21:59:08.044404   25747 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 21:59:08.044417   25747 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 21:59:08.044426   25747 command_runner.go:130] > # default_env = [
	I0914 21:59:08.044429   25747 command_runner.go:130] > # ]
	I0914 21:59:08.044437   25747 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 21:59:08.044447   25747 command_runner.go:130] > # selinux = false
	I0914 21:59:08.044458   25747 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 21:59:08.044472   25747 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 21:59:08.044485   25747 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 21:59:08.044495   25747 command_runner.go:130] > # seccomp_profile = ""
	I0914 21:59:08.044504   25747 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 21:59:08.044514   25747 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 21:59:08.044521   25747 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 21:59:08.044532   25747 command_runner.go:130] > # which might increase security.
	I0914 21:59:08.044544   25747 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 21:59:08.044557   25747 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 21:59:08.044570   25747 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 21:59:08.044584   25747 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 21:59:08.044593   25747 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 21:59:08.044602   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 21:59:08.044608   25747 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 21:59:08.044623   25747 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 21:59:08.044634   25747 command_runner.go:130] > # the cgroup blockio controller.
	I0914 21:59:08.044645   25747 command_runner.go:130] > # blockio_config_file = ""
	I0914 21:59:08.044658   25747 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 21:59:08.044668   25747 command_runner.go:130] > # irqbalance daemon.
	I0914 21:59:08.044680   25747 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 21:59:08.044687   25747 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 21:59:08.044698   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 21:59:08.044707   25747 command_runner.go:130] > # rdt_config_file = ""
	I0914 21:59:08.044720   25747 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 21:59:08.044730   25747 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 21:59:08.044748   25747 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 21:59:08.044758   25747 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 21:59:08.044770   25747 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 21:59:08.044780   25747 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 21:59:08.044787   25747 command_runner.go:130] > # will be added.
	I0914 21:59:08.044798   25747 command_runner.go:130] > # default_capabilities = [
	I0914 21:59:08.044806   25747 command_runner.go:130] > # 	"CHOWN",
	I0914 21:59:08.044816   25747 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 21:59:08.044823   25747 command_runner.go:130] > # 	"FSETID",
	I0914 21:59:08.044832   25747 command_runner.go:130] > # 	"FOWNER",
	I0914 21:59:08.044839   25747 command_runner.go:130] > # 	"SETGID",
	I0914 21:59:08.044849   25747 command_runner.go:130] > # 	"SETUID",
	I0914 21:59:08.044855   25747 command_runner.go:130] > # 	"SETPCAP",
	I0914 21:59:08.044862   25747 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 21:59:08.044867   25747 command_runner.go:130] > # 	"KILL",
	I0914 21:59:08.044872   25747 command_runner.go:130] > # ]
	I0914 21:59:08.044886   25747 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 21:59:08.044899   25747 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 21:59:08.044909   25747 command_runner.go:130] > # default_sysctls = [
	I0914 21:59:08.044919   25747 command_runner.go:130] > # ]
	I0914 21:59:08.044927   25747 command_runner.go:130] > # List of devices on the host that a
	I0914 21:59:08.044940   25747 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 21:59:08.044947   25747 command_runner.go:130] > # allowed_devices = [
	I0914 21:59:08.044951   25747 command_runner.go:130] > # 	"/dev/fuse",
	I0914 21:59:08.044958   25747 command_runner.go:130] > # ]
	I0914 21:59:08.044975   25747 command_runner.go:130] > # List of additional devices, specified as
	I0914 21:59:08.044991   25747 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 21:59:08.045003   25747 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 21:59:08.045027   25747 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 21:59:08.045034   25747 command_runner.go:130] > # additional_devices = [
	I0914 21:59:08.045039   25747 command_runner.go:130] > # ]
	I0914 21:59:08.045052   25747 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 21:59:08.045059   25747 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 21:59:08.045069   25747 command_runner.go:130] > # 	"/etc/cdi",
	I0914 21:59:08.045076   25747 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 21:59:08.045084   25747 command_runner.go:130] > # ]
	I0914 21:59:08.045095   25747 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 21:59:08.045108   25747 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 21:59:08.045116   25747 command_runner.go:130] > # Defaults to false.
	I0914 21:59:08.045121   25747 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 21:59:08.045135   25747 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 21:59:08.045149   25747 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 21:59:08.045159   25747 command_runner.go:130] > # hooks_dir = [
	I0914 21:59:08.045167   25747 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 21:59:08.045176   25747 command_runner.go:130] > # ]
	I0914 21:59:08.045187   25747 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 21:59:08.045198   25747 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 21:59:08.045206   25747 command_runner.go:130] > # its default mounts from the following two files:
	I0914 21:59:08.045212   25747 command_runner.go:130] > #
	I0914 21:59:08.045226   25747 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 21:59:08.045240   25747 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 21:59:08.045252   25747 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 21:59:08.045261   25747 command_runner.go:130] > #
	I0914 21:59:08.045273   25747 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 21:59:08.045285   25747 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 21:59:08.045295   25747 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 21:59:08.045303   25747 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 21:59:08.045312   25747 command_runner.go:130] > #
	I0914 21:59:08.045321   25747 command_runner.go:130] > # default_mounts_file = ""
	I0914 21:59:08.045333   25747 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 21:59:08.045345   25747 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 21:59:08.045355   25747 command_runner.go:130] > pids_limit = 1024
	I0914 21:59:08.045368   25747 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0914 21:59:08.045377   25747 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 21:59:08.045386   25747 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 21:59:08.045403   25747 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 21:59:08.045414   25747 command_runner.go:130] > # log_size_max = -1
	I0914 21:59:08.045428   25747 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0914 21:59:08.045439   25747 command_runner.go:130] > # log_to_journald = false
	I0914 21:59:08.045449   25747 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 21:59:08.045460   25747 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 21:59:08.045468   25747 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 21:59:08.045475   25747 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 21:59:08.045488   25747 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 21:59:08.045499   25747 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 21:59:08.045512   25747 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 21:59:08.045522   25747 command_runner.go:130] > # read_only = false
	I0914 21:59:08.045536   25747 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 21:59:08.045548   25747 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 21:59:08.045556   25747 command_runner.go:130] > # live configuration reload.
	I0914 21:59:08.045561   25747 command_runner.go:130] > # log_level = "info"
	I0914 21:59:08.045574   25747 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 21:59:08.045586   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 21:59:08.045596   25747 command_runner.go:130] > # log_filter = ""
	I0914 21:59:08.045607   25747 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 21:59:08.045620   25747 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 21:59:08.045630   25747 command_runner.go:130] > # separated by comma.
	I0914 21:59:08.045636   25747 command_runner.go:130] > # uid_mappings = ""
	I0914 21:59:08.045645   25747 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 21:59:08.045655   25747 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 21:59:08.045665   25747 command_runner.go:130] > # separated by comma.
	I0914 21:59:08.045674   25747 command_runner.go:130] > # gid_mappings = ""
	I0914 21:59:08.045685   25747 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 21:59:08.045698   25747 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 21:59:08.045711   25747 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 21:59:08.045721   25747 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 21:59:08.045727   25747 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 21:59:08.045740   25747 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 21:59:08.045762   25747 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 21:59:08.045773   25747 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 21:59:08.045786   25747 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 21:59:08.045799   25747 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 21:59:08.045810   25747 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 21:59:08.045814   25747 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 21:59:08.045827   25747 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 21:59:08.045841   25747 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 21:59:08.045853   25747 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 21:59:08.045865   25747 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 21:59:08.045878   25747 command_runner.go:130] > drop_infra_ctr = false
	I0914 21:59:08.045892   25747 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 21:59:08.045900   25747 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 21:59:08.045911   25747 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 21:59:08.045922   25747 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 21:59:08.045935   25747 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 21:59:08.045947   25747 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 21:59:08.045958   25747 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 21:59:08.045973   25747 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 21:59:08.045981   25747 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 21:59:08.045987   25747 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 21:59:08.046000   25747 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0914 21:59:08.046015   25747 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0914 21:59:08.046025   25747 command_runner.go:130] > # default_runtime = "runc"
	I0914 21:59:08.046037   25747 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 21:59:08.046052   25747 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 21:59:08.046068   25747 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 21:59:08.046077   25747 command_runner.go:130] > # creation as a file is not desired either.
	I0914 21:59:08.046091   25747 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 21:59:08.046104   25747 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 21:59:08.046112   25747 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 21:59:08.046121   25747 command_runner.go:130] > # ]
	I0914 21:59:08.046132   25747 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 21:59:08.046145   25747 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 21:59:08.046156   25747 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0914 21:59:08.046169   25747 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0914 21:59:08.046178   25747 command_runner.go:130] > #
	I0914 21:59:08.046190   25747 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0914 21:59:08.046199   25747 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0914 21:59:08.046207   25747 command_runner.go:130] > #  runtime_type = "oci"
	I0914 21:59:08.046218   25747 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0914 21:59:08.046229   25747 command_runner.go:130] > #  privileged_without_host_devices = false
	I0914 21:59:08.046238   25747 command_runner.go:130] > #  allowed_annotations = []
	I0914 21:59:08.046242   25747 command_runner.go:130] > # Where:
	I0914 21:59:08.046252   25747 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0914 21:59:08.046268   25747 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0914 21:59:08.046282   25747 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 21:59:08.046295   25747 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 21:59:08.046305   25747 command_runner.go:130] > #   in $PATH.
	I0914 21:59:08.046315   25747 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0914 21:59:08.046325   25747 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 21:59:08.046332   25747 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0914 21:59:08.046341   25747 command_runner.go:130] > #   state.
	I0914 21:59:08.046353   25747 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 21:59:08.046366   25747 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0914 21:59:08.046379   25747 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 21:59:08.046389   25747 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 21:59:08.046402   25747 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 21:59:08.046412   25747 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 21:59:08.046421   25747 command_runner.go:130] > #   The currently recognized values are:
	I0914 21:59:08.046434   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 21:59:08.046449   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 21:59:08.046462   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 21:59:08.046474   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 21:59:08.046490   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 21:59:08.046516   25747 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 21:59:08.046534   25747 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 21:59:08.046548   25747 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0914 21:59:08.046560   25747 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 21:59:08.046571   25747 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 21:59:08.046581   25747 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 21:59:08.046587   25747 command_runner.go:130] > runtime_type = "oci"
	I0914 21:59:08.046596   25747 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 21:59:08.046604   25747 command_runner.go:130] > runtime_config_path = ""
	I0914 21:59:08.046615   25747 command_runner.go:130] > monitor_path = ""
	I0914 21:59:08.046625   25747 command_runner.go:130] > monitor_cgroup = ""
	I0914 21:59:08.046632   25747 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 21:59:08.046646   25747 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0914 21:59:08.046656   25747 command_runner.go:130] > # running containers
	I0914 21:59:08.046666   25747 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0914 21:59:08.046675   25747 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0914 21:59:08.046709   25747 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0914 21:59:08.046723   25747 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0914 21:59:08.046732   25747 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0914 21:59:08.046748   25747 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0914 21:59:08.046758   25747 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0914 21:59:08.046765   25747 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0914 21:59:08.046773   25747 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0914 21:59:08.046785   25747 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0914 21:59:08.046800   25747 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 21:59:08.046812   25747 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 21:59:08.046825   25747 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 21:59:08.046842   25747 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 21:59:08.046854   25747 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 21:59:08.046868   25747 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 21:59:08.046887   25747 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 21:59:08.046902   25747 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 21:59:08.046915   25747 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 21:59:08.046928   25747 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 21:59:08.046933   25747 command_runner.go:130] > # Example:
	I0914 21:59:08.046943   25747 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 21:59:08.046956   25747 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 21:59:08.046968   25747 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 21:59:08.046980   25747 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 21:59:08.046989   25747 command_runner.go:130] > # cpuset = 0
	I0914 21:59:08.046996   25747 command_runner.go:130] > # cpushares = "0-1"
	I0914 21:59:08.047005   25747 command_runner.go:130] > # Where:
	I0914 21:59:08.047015   25747 command_runner.go:130] > # The workload name is workload-type.
	I0914 21:59:08.047023   25747 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 21:59:08.047036   25747 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 21:59:08.047046   25747 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 21:59:08.047062   25747 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 21:59:08.047075   25747 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 21:59:08.047084   25747 command_runner.go:130] > # 
	I0914 21:59:08.047094   25747 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 21:59:08.047101   25747 command_runner.go:130] > #
	I0914 21:59:08.047108   25747 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 21:59:08.047121   25747 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 21:59:08.047135   25747 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 21:59:08.047147   25747 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 21:59:08.047160   25747 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 21:59:08.047170   25747 command_runner.go:130] > [crio.image]
	I0914 21:59:08.047180   25747 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 21:59:08.047188   25747 command_runner.go:130] > # default_transport = "docker://"
	I0914 21:59:08.047197   25747 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 21:59:08.047233   25747 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 21:59:08.047246   25747 command_runner.go:130] > # global_auth_file = ""
	I0914 21:59:08.047255   25747 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 21:59:08.047266   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 21:59:08.047273   25747 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0914 21:59:08.047281   25747 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 21:59:08.047294   25747 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 21:59:08.047306   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 21:59:08.047317   25747 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 21:59:08.047330   25747 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 21:59:08.047343   25747 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 21:59:08.047353   25747 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 21:59:08.047360   25747 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 21:59:08.047364   25747 command_runner.go:130] > # pause_command = "/pause"
	I0914 21:59:08.047374   25747 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 21:59:08.047385   25747 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 21:59:08.047396   25747 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 21:59:08.047406   25747 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 21:59:08.047416   25747 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 21:59:08.047423   25747 command_runner.go:130] > # signature_policy = ""
	I0914 21:59:08.047433   25747 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 21:59:08.047442   25747 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 21:59:08.047446   25747 command_runner.go:130] > # changing them here.
	I0914 21:59:08.047450   25747 command_runner.go:130] > # insecure_registries = [
	I0914 21:59:08.047455   25747 command_runner.go:130] > # ]
	I0914 21:59:08.047479   25747 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 21:59:08.047488   25747 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 21:59:08.047496   25747 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 21:59:08.047504   25747 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 21:59:08.047512   25747 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 21:59:08.047522   25747 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0914 21:59:08.047529   25747 command_runner.go:130] > # CNI plugins.
	I0914 21:59:08.047536   25747 command_runner.go:130] > [crio.network]
	I0914 21:59:08.047543   25747 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 21:59:08.047549   25747 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 21:59:08.047556   25747 command_runner.go:130] > # cni_default_network = ""
	I0914 21:59:08.047566   25747 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 21:59:08.047574   25747 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 21:59:08.047587   25747 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 21:59:08.047597   25747 command_runner.go:130] > # plugin_dirs = [
	I0914 21:59:08.047604   25747 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 21:59:08.047612   25747 command_runner.go:130] > # ]
	I0914 21:59:08.047622   25747 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 21:59:08.047630   25747 command_runner.go:130] > [crio.metrics]
	I0914 21:59:08.047636   25747 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 21:59:08.047643   25747 command_runner.go:130] > enable_metrics = true
	I0914 21:59:08.047648   25747 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 21:59:08.047654   25747 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 21:59:08.047663   25747 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0914 21:59:08.047678   25747 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 21:59:08.047690   25747 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 21:59:08.047701   25747 command_runner.go:130] > # metrics_collectors = [
	I0914 21:59:08.047709   25747 command_runner.go:130] > # 	"operations",
	I0914 21:59:08.047721   25747 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 21:59:08.047733   25747 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 21:59:08.047740   25747 command_runner.go:130] > # 	"operations_errors",
	I0914 21:59:08.047747   25747 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 21:59:08.047753   25747 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 21:59:08.047758   25747 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 21:59:08.047764   25747 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 21:59:08.047769   25747 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 21:59:08.047775   25747 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 21:59:08.047779   25747 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 21:59:08.047783   25747 command_runner.go:130] > # 	"containers_oom_total",
	I0914 21:59:08.047789   25747 command_runner.go:130] > # 	"containers_oom",
	I0914 21:59:08.047793   25747 command_runner.go:130] > # 	"processes_defunct",
	I0914 21:59:08.047798   25747 command_runner.go:130] > # 	"operations_total",
	I0914 21:59:08.047803   25747 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 21:59:08.047814   25747 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 21:59:08.047825   25747 command_runner.go:130] > # 	"operations_errors_total",
	I0914 21:59:08.047836   25747 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 21:59:08.047845   25747 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 21:59:08.047856   25747 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 21:59:08.047866   25747 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 21:59:08.047876   25747 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 21:59:08.047882   25747 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 21:59:08.047885   25747 command_runner.go:130] > # ]
	I0914 21:59:08.047891   25747 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 21:59:08.047898   25747 command_runner.go:130] > # metrics_port = 9090
	I0914 21:59:08.047903   25747 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 21:59:08.047909   25747 command_runner.go:130] > # metrics_socket = ""
	I0914 21:59:08.047914   25747 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 21:59:08.047921   25747 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 21:59:08.047927   25747 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 21:59:08.047934   25747 command_runner.go:130] > # certificate on any modification event.
	I0914 21:59:08.047938   25747 command_runner.go:130] > # metrics_cert = ""
	I0914 21:59:08.047944   25747 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 21:59:08.047949   25747 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 21:59:08.047955   25747 command_runner.go:130] > # metrics_key = ""
	I0914 21:59:08.047961   25747 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 21:59:08.047968   25747 command_runner.go:130] > [crio.tracing]
	I0914 21:59:08.047973   25747 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 21:59:08.047980   25747 command_runner.go:130] > # enable_tracing = false
	I0914 21:59:08.047985   25747 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 21:59:08.047990   25747 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 21:59:08.047997   25747 command_runner.go:130] > # Number of samples to collect per million spans.
	I0914 21:59:08.048002   25747 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 21:59:08.048010   25747 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 21:59:08.048014   25747 command_runner.go:130] > [crio.stats]
	I0914 21:59:08.048022   25747 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 21:59:08.048028   25747 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 21:59:08.048039   25747 command_runner.go:130] > # stats_collection_period = 0
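The dump above is the full CRI-O configuration rendered on the node; only a handful of keys are set to non-default values (cgroup_manager, pids_limit, drop_infra_ctr, pinns_path, pause_image, enable_metrics, and the runc runtime block). A minimal sketch for pulling those out by hand, assuming the file lives at the usual /etc/crio/crio.conf path (the path is not stated in this excerpt):

	# Illustrative only: list the settings minikube overrides in the CRI-O config.
	# Path is an assumption; adjust if the node stores the config elsewhere.
	sudo grep -E '^(cgroup_manager|pids_limit|drop_infra_ctr|pinns_path|pause_image|enable_metrics)' \
	    /etc/crio/crio.conf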
	I0914 21:59:08.048108   25747 cni.go:84] Creating CNI manager for ""
	I0914 21:59:08.048119   25747 cni.go:136] 1 nodes found, recommending kindnet
	I0914 21:59:08.048134   25747 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 21:59:08.048150   25747 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124911 NodeName:multinode-124911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 21:59:08.048262   25747 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-124911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
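The generated kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml (see the cp step further down in this log). As a hedged, out-of-band sanity check of such a file, kubeadm's standard dry-run mode renders the full plan without changing the node; the path below is taken from this log, nothing else is:

	# Illustrative only, not part of the test run: dry-run the generated config.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run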
	
	I0914 21:59:08.048327   25747 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-124911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 21:59:08.048374   25747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 21:59:08.057357   25747 command_runner.go:130] > kubeadm
	I0914 21:59:08.057379   25747 command_runner.go:130] > kubectl
	I0914 21:59:08.057384   25747 command_runner.go:130] > kubelet
	I0914 21:59:08.057408   25747 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 21:59:08.057455   25747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 21:59:08.065793   25747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0914 21:59:08.082033   25747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
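The two scp calls above write the kubelet drop-in (10-kubeadm.conf) and the kubelet.service unit shown earlier. A small sketch, using standard systemctl commands not taken from this log, for confirming on the node that systemd sees both files:

	# Show the unit together with its drop-in as systemd has loaded them.
	sudo systemctl cat kubelet
	# Reload unit files if the drop-in was edited by hand.
	sudo systemctl daemon-reload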
	I0914 21:59:08.098314   25747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0914 21:59:08.114727   25747 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0914 21:59:08.118535   25747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 21:59:08.131829   25747 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911 for IP: 192.168.39.116
	I0914 21:59:08.131860   25747 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:59:08.131991   25747 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 21:59:08.132029   25747 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 21:59:08.132072   25747 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key
	I0914 21:59:08.132098   25747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt with IP's: []
	I0914 21:59:08.191569   25747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt ...
	I0914 21:59:08.191598   25747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt: {Name:mk73d9583d9cc46d70ebc2e41dc760ebd5095ec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:59:08.191772   25747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key ...
	I0914 21:59:08.191782   25747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key: {Name:mk88771e43fc0f0263b1d19e6ae1af6761c3b784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:59:08.191847   25747 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key.12d79366
	I0914 21:59:08.191861   25747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt.12d79366 with IP's: [192.168.39.116 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 21:59:08.508692   25747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt.12d79366 ...
	I0914 21:59:08.508728   25747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt.12d79366: {Name:mkdb99652ebe8a9473f3a0f3bd9cf6e39c301a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:59:08.508903   25747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key.12d79366 ...
	I0914 21:59:08.508915   25747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key.12d79366: {Name:mk947968430ef5e7c6a6526da3fdb1a25d4c2e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:59:08.508976   25747 certs.go:337] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt.12d79366 -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt
	I0914 21:59:08.509038   25747 certs.go:341] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key.12d79366 -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key
	I0914 21:59:08.509087   25747 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.key
	I0914 21:59:08.509100   25747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.crt with IP's: []
	I0914 21:59:08.585209   25747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.crt ...
	I0914 21:59:08.585234   25747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.crt: {Name:mkf5e1c0e43f1c58ccd11fb8eb1eff505fe6799e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:59:08.585371   25747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.key ...
	I0914 21:59:08.585381   25747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.key: {Name:mkd84f7bac72b1929170d640e68b7ab3726b0162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
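The certs.go/crypto.go steps above generate the client, apiserver, and aggregator (proxy-client) certificates, each signed by a pre-existing CA. As a rough, hedged analogue of one such step, here is an openssl sketch for a CA-signed client certificate; the file names and subject are illustrative, not minikube's actual values:

	# Key and CSR for a hypothetical client identity.
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/O=system:masters/CN=example-user" -out client.csr
	# Sign the CSR with an existing CA key pair, as minikube does with minikubeCA.
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	    -out client.crt -days 365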
	I0914 21:59:08.585440   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 21:59:08.585457   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 21:59:08.585467   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 21:59:08.585477   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 21:59:08.585489   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 21:59:08.585502   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 21:59:08.585516   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 21:59:08.585528   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 21:59:08.585577   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 21:59:08.585620   25747 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 21:59:08.585630   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 21:59:08.585655   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 21:59:08.585677   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 21:59:08.585699   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 21:59:08.585742   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 21:59:08.585766   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:59:08.585779   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem -> /usr/share/ca-certificates/13485.pem
	I0914 21:59:08.585792   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /usr/share/ca-certificates/134852.pem
	I0914 21:59:08.586272   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 21:59:08.612415   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 21:59:08.634484   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 21:59:08.655045   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 21:59:08.675603   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 21:59:08.695494   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 21:59:08.716743   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 21:59:08.737797   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 21:59:08.757729   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 21:59:08.777732   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 21:59:08.798230   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 21:59:08.818490   25747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 21:59:08.832699   25747 ssh_runner.go:195] Run: openssl version
	I0914 21:59:08.837739   25747 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0914 21:59:08.838059   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 21:59:08.847594   25747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 21:59:08.851819   25747 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 21:59:08.851885   25747 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 21:59:08.851942   25747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 21:59:08.857026   25747 command_runner.go:130] > 51391683
	I0914 21:59:08.857098   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 21:59:08.866447   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 21:59:08.876022   25747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 21:59:08.880105   25747 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 21:59:08.880128   25747 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 21:59:08.880178   25747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 21:59:08.885076   25747 command_runner.go:130] > 3ec20f2e
	I0914 21:59:08.885317   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 21:59:08.894552   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 21:59:08.903812   25747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:59:08.907773   25747 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:59:08.907798   25747 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:59:08.907829   25747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 21:59:08.912878   25747 command_runner.go:130] > b5213941
	I0914 21:59:08.912930   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
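The three ln -fs commands above create OpenSSL subject-hash symlinks (51391683.0, 3ec20f2e.0, b5213941.0) so the certificates can be found through the system CA path. An illustrative one-line check that the hashed layout works, using standard openssl and not part of the test run:

	# Verify the minikube CA resolves through the hashed /etc/ssl/certs links.
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem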
	I0914 21:59:08.922294   25747 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 21:59:08.925766   25747 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 21:59:08.925892   25747 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 21:59:08.925952   25747 kubeadm.go:404] StartCluster: {Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:59:08.926051   25747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 21:59:08.926104   25747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 21:59:08.953156   25747 cri.go:89] found id: ""
	I0914 21:59:08.953217   25747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 21:59:08.961654   25747 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0914 21:59:08.961677   25747 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0914 21:59:08.961683   25747 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0914 21:59:08.961940   25747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 21:59:08.970188   25747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 21:59:08.978555   25747 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0914 21:59:08.978587   25747 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0914 21:59:08.978599   25747 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0914 21:59:08.978610   25747 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 21:59:08.978645   25747 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 21:59:08.978676   25747 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 21:59:09.085894   25747 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 21:59:09.085916   25747 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0914 21:59:09.085949   25747 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 21:59:09.085953   25747 command_runner.go:130] > [preflight] Running pre-flight checks
	I0914 21:59:09.306207   25747 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 21:59:09.306236   25747 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 21:59:09.306336   25747 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 21:59:09.306347   25747 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 21:59:09.306447   25747 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 21:59:09.306466   25747 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 21:59:09.474443   25747 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 21:59:09.474562   25747 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 21:59:09.715336   25747 out.go:204]   - Generating certificates and keys ...
	I0914 21:59:09.715458   25747 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0914 21:59:09.715505   25747 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 21:59:09.715635   25747 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 21:59:09.715647   25747 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0914 21:59:09.715714   25747 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 21:59:09.715725   25747 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 21:59:09.715776   25747 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 21:59:09.715783   25747 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0914 21:59:09.734257   25747 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 21:59:09.734280   25747 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0914 21:59:10.086120   25747 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 21:59:10.086146   25747 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0914 21:59:10.209836   25747 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 21:59:10.209865   25747 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0914 21:59:10.210011   25747 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-124911] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0914 21:59:10.210028   25747 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-124911] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0914 21:59:10.464367   25747 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 21:59:10.464401   25747 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0914 21:59:10.464529   25747 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-124911] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0914 21:59:10.464540   25747 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-124911] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0914 21:59:10.776946   25747 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 21:59:10.776976   25747 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 21:59:10.902985   25747 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 21:59:10.903033   25747 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 21:59:11.108539   25747 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 21:59:11.108570   25747 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0914 21:59:11.108733   25747 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 21:59:11.108759   25747 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 21:59:11.194107   25747 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 21:59:11.194134   25747 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 21:59:11.299073   25747 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 21:59:11.299098   25747 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 21:59:11.452429   25747 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 21:59:11.452445   25747 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 21:59:11.588861   25747 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 21:59:11.588906   25747 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 21:59:11.590075   25747 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 21:59:11.590105   25747 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 21:59:11.594350   25747 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 21:59:11.637714   25747 out.go:204]   - Booting up control plane ...
	I0914 21:59:11.594419   25747 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 21:59:11.637872   25747 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 21:59:11.637887   25747 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 21:59:11.637961   25747 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 21:59:11.637969   25747 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 21:59:11.638042   25747 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 21:59:11.638050   25747 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 21:59:11.638167   25747 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 21:59:11.638190   25747 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 21:59:11.638314   25747 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 21:59:11.638324   25747 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 21:59:11.638353   25747 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 21:59:11.638360   25747 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 21:59:11.752791   25747 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 21:59:11.752815   25747 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
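
At this point kubeadm has written the control-plane static Pod manifests and is waiting for the kubelet to launch them. A quick way to confirm the manifests exist on the node (profile name taken from this run):

    minikube -p multinode-124911 ssh -- sudo ls /etc/kubernetes/manifests
    # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
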
	I0914 21:59:19.251440   25747 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502570 seconds
	I0914 21:59:19.251479   25747 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.502570 seconds
	I0914 21:59:19.251610   25747 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 21:59:19.251626   25747 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 21:59:19.266782   25747 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 21:59:19.266815   25747 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 21:59:19.793695   25747 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 21:59:19.793722   25747 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0914 21:59:19.793962   25747 kubeadm.go:322] [mark-control-plane] Marking the node multinode-124911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 21:59:19.793979   25747 command_runner.go:130] > [mark-control-plane] Marking the node multinode-124911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 21:59:20.307986   25747 kubeadm.go:322] [bootstrap-token] Using token: m2ic5q.3jmhhvhcpl4eyssc
	I0914 21:59:20.309526   25747 out.go:204]   - Configuring RBAC rules ...
	I0914 21:59:20.308049   25747 command_runner.go:130] > [bootstrap-token] Using token: m2ic5q.3jmhhvhcpl4eyssc
	I0914 21:59:20.309657   25747 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 21:59:20.309671   25747 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 21:59:20.314970   25747 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 21:59:20.314987   25747 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 21:59:20.327899   25747 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 21:59:20.327925   25747 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 21:59:20.333872   25747 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 21:59:20.333898   25747 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 21:59:20.337459   25747 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 21:59:20.337486   25747 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 21:59:20.340944   25747 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 21:59:20.340960   25747 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 21:59:20.356542   25747 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 21:59:20.356561   25747 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 21:59:20.571829   25747 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 21:59:20.571854   25747 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0914 21:59:20.720005   25747 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 21:59:20.720030   25747 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0914 21:59:20.720995   25747 kubeadm.go:322] 
	I0914 21:59:20.721091   25747 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 21:59:20.721116   25747 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0914 21:59:20.721128   25747 kubeadm.go:322] 
	I0914 21:59:20.721189   25747 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 21:59:20.721199   25747 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0914 21:59:20.721207   25747 kubeadm.go:322] 
	I0914 21:59:20.721251   25747 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 21:59:20.721267   25747 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0914 21:59:20.721346   25747 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 21:59:20.721355   25747 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 21:59:20.721435   25747 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 21:59:20.721448   25747 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 21:59:20.721455   25747 kubeadm.go:322] 
	I0914 21:59:20.721530   25747 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 21:59:20.721542   25747 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0914 21:59:20.721548   25747 kubeadm.go:322] 
	I0914 21:59:20.721653   25747 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 21:59:20.721673   25747 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 21:59:20.721690   25747 kubeadm.go:322] 
	I0914 21:59:20.721755   25747 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 21:59:20.721766   25747 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0914 21:59:20.721863   25747 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 21:59:20.721878   25747 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 21:59:20.721980   25747 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 21:59:20.721991   25747 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 21:59:20.721997   25747 kubeadm.go:322] 
	I0914 21:59:20.722106   25747 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 21:59:20.722116   25747 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0914 21:59:20.722223   25747 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 21:59:20.722239   25747 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0914 21:59:20.722246   25747 kubeadm.go:322] 
	I0914 21:59:20.722361   25747 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m2ic5q.3jmhhvhcpl4eyssc \
	I0914 21:59:20.722370   25747 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token m2ic5q.3jmhhvhcpl4eyssc \
	I0914 21:59:20.722489   25747 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 21:59:20.722499   25747 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 21:59:20.722526   25747 kubeadm.go:322] 	--control-plane 
	I0914 21:59:20.722536   25747 command_runner.go:130] > 	--control-plane 
	I0914 21:59:20.722543   25747 kubeadm.go:322] 
	I0914 21:59:20.722666   25747 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 21:59:20.722676   25747 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0914 21:59:20.722682   25747 kubeadm.go:322] 
	I0914 21:59:20.722813   25747 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m2ic5q.3jmhhvhcpl4eyssc \
	I0914 21:59:20.722822   25747 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token m2ic5q.3jmhhvhcpl4eyssc \
	I0914 21:59:20.722939   25747 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 21:59:20.722959   25747 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 21:59:20.723129   25747 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 21:59:20.723133   25747 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
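
The kubelet warning above is informational: the service is running but not enabled at boot. Following kubeadm's own suggestion on the VM would amount to:

    sudo systemctl enable kubelet.service
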
	I0914 21:59:20.723157   25747 cni.go:84] Creating CNI manager for ""
	I0914 21:59:20.723173   25747 cni.go:136] 1 nodes found, recommending kindnet
	I0914 21:59:20.725997   25747 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 21:59:20.727399   25747 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 21:59:20.744609   25747 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 21:59:20.744632   25747 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0914 21:59:20.744643   25747 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0914 21:59:20.744653   25747 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 21:59:20.744665   25747 command_runner.go:130] > Access: 2023-09-14 21:58:49.372289482 +0000
	I0914 21:59:20.744676   25747 command_runner.go:130] > Modify: 2023-09-13 23:09:37.000000000 +0000
	I0914 21:59:20.744687   25747 command_runner.go:130] > Change: 2023-09-14 21:58:47.705289482 +0000
	I0914 21:59:20.744696   25747 command_runner.go:130] >  Birth: -
	I0914 21:59:20.745396   25747 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 21:59:20.745416   25747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 21:59:20.767462   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 21:59:21.728078   25747 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0914 21:59:21.739352   25747 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0914 21:59:21.759742   25747 command_runner.go:130] > serviceaccount/kindnet created
	I0914 21:59:21.787089   25747 command_runner.go:130] > daemonset.apps/kindnet created
	I0914 21:59:21.789515   25747 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.022015764s)
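
With the kindnet manifest applied, the DaemonSet rollout can be verified directly; a small sketch (the app=kindnet label selector is an assumption about minikube's bundled manifest, not shown in this log):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
    kubectl -n kube-system get pods -l app=kindnet -o wide
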
	I0914 21:59:21.789553   25747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 21:59:21.789619   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:21.789685   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=multinode-124911 minikube.k8s.io/updated_at=2023_09_14T21_59_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:21.964565   25747 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0914 21:59:21.968336   25747 command_runner.go:130] > -16
	I0914 21:59:21.968365   25747 ops.go:34] apiserver oom_adj: -16
	I0914 21:59:21.968510   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:21.979249   25747 command_runner.go:130] > node/multinode-124911 labeled
	I0914 21:59:22.053954   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:22.056234   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:22.132934   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:22.634877   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:22.715505   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:23.135772   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:23.213377   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:23.635573   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:23.715617   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:24.135021   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:24.213205   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:24.635523   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:24.718478   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:25.134991   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:25.210998   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:25.635203   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:25.713914   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:26.135566   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:26.214782   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:26.634940   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:26.713615   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:27.135077   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:27.216698   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:27.635589   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:27.715951   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:28.135505   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:28.230039   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:28.635705   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:28.731980   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:29.135607   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:29.226762   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:29.635516   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:29.712031   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:30.135398   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:30.228428   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:30.634947   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:30.752260   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:31.134794   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:31.233876   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:31.635509   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:31.723744   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:32.135563   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:32.227839   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:32.635012   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:32.732816   25747 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 21:59:33.135437   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 21:59:33.301656   25747 command_runner.go:130] > NAME      SECRETS   AGE
	I0914 21:59:33.301678   25747 command_runner.go:130] > default   0         1s
	I0914 21:59:33.301750   25747 kubeadm.go:1081] duration metric: took 11.512179845s to wait for elevateKubeSystemPrivileges.
	I0914 21:59:33.301781   25747 kubeadm.go:406] StartCluster complete in 24.375832893s
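
The repeated 'serviceaccounts "default" not found' errors above are expected: minikube simply polls until the controller-manager creates the default ServiceAccount, which took about 11.5s here. A rough shell equivalent of that wait loop:

    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
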
	I0914 21:59:33.301802   25747 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:59:33.301885   25747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:59:33.302516   25747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 21:59:33.302716   25747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 21:59:33.302802   25747 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 21:59:33.302880   25747 addons.go:69] Setting default-storageclass=true in profile "multinode-124911"
	I0914 21:59:33.302931   25747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-124911"
	I0914 21:59:33.302938   25747 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 21:59:33.302885   25747 addons.go:69] Setting storage-provisioner=true in profile "multinode-124911"
	I0914 21:59:33.302995   25747 addons.go:231] Setting addon storage-provisioner=true in "multinode-124911"
	I0914 21:59:33.303043   25747 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:59:33.303045   25747 host.go:66] Checking if "multinode-124911" exists ...
	I0914 21:59:33.303396   25747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:59:33.303430   25747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:59:33.303368   25747 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 21:59:33.303520   25747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:59:33.303552   25747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:59:33.304129   25747 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 21:59:33.304384   25747 round_trippers.go:463] GET https://192.168.39.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 21:59:33.304400   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:33.304412   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:33.304421   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:33.323351   25747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40491
	I0914 21:59:33.323679   25747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I0914 21:59:33.323888   25747 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:59:33.323982   25747 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:59:33.324436   25747 main.go:141] libmachine: Using API Version  1
	I0914 21:59:33.324456   25747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:59:33.324585   25747 main.go:141] libmachine: Using API Version  1
	I0914 21:59:33.324610   25747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:59:33.324802   25747 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:59:33.324968   25747 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:59:33.325148   25747 main.go:141] libmachine: (multinode-124911) Calling .GetState
	I0914 21:59:33.325319   25747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:59:33.325359   25747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:59:33.327120   25747 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:59:33.327422   25747 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 21:59:33.327813   25747 round_trippers.go:463] GET https://192.168.39.116:8443/apis/storage.k8s.io/v1/storageclasses
	I0914 21:59:33.327825   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:33.327835   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:33.327845   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:33.334469   25747 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 21:59:33.334492   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:33.334502   25747 round_trippers.go:580]     Audit-Id: 5dc5c9d4-76df-4281-8c6a-800a5c45d731
	I0914 21:59:33.334510   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:33.334517   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:33.334524   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:33.334532   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:33.334542   25747 round_trippers.go:580]     Content-Length: 109
	I0914 21:59:33.334550   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:33 GMT
	I0914 21:59:33.334589   25747 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"326"},"items":[]}
	I0914 21:59:33.334956   25747 addons.go:231] Setting addon default-storageclass=true in "multinode-124911"
	I0914 21:59:33.334999   25747 host.go:66] Checking if "multinode-124911" exists ...
	I0914 21:59:33.335387   25747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:59:33.335430   25747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:59:33.336765   25747 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0914 21:59:33.336786   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:33.336797   25747 round_trippers.go:580]     Audit-Id: a79b1278-3751-4f0b-a35b-0db3a5fbb693
	I0914 21:59:33.336807   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:33.336816   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:33.336827   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:33.336841   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:33.336857   25747 round_trippers.go:580]     Content-Length: 291
	I0914 21:59:33.336870   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:33 GMT
	I0914 21:59:33.336904   25747 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d40ee9-9834-4f82-84c2-51e3c14c181f","resourceVersion":"323","creationTimestamp":"2023-09-14T21:59:20Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0914 21:59:33.337284   25747 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d40ee9-9834-4f82-84c2-51e3c14c181f","resourceVersion":"323","creationTimestamp":"2023-09-14T21:59:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0914 21:59:33.337352   25747 round_trippers.go:463] PUT https://192.168.39.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 21:59:33.337366   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:33.337378   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:33.337392   25747 round_trippers.go:473]     Content-Type: application/json
	I0914 21:59:33.337406   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:33.340776   25747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0914 21:59:33.341166   25747 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:59:33.341677   25747 main.go:141] libmachine: Using API Version  1
	I0914 21:59:33.341703   25747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:59:33.342037   25747 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:59:33.342247   25747 main.go:141] libmachine: (multinode-124911) Calling .GetState
	I0914 21:59:33.343793   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:33.345804   25747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 21:59:33.347599   25747 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 21:59:33.347614   25747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 21:59:33.347627   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:33.351029   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:33.351592   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:33.351619   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:33.351791   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:33.351980   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:33.352149   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:33.352303   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 21:59:33.352352   25747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0914 21:59:33.352680   25747 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:59:33.353151   25747 main.go:141] libmachine: Using API Version  1
	I0914 21:59:33.353183   25747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:59:33.353502   25747 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:59:33.353962   25747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:59:33.354002   25747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:59:33.368731   25747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44633
	I0914 21:59:33.369211   25747 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:59:33.369681   25747 main.go:141] libmachine: Using API Version  1
	I0914 21:59:33.369708   25747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:59:33.370067   25747 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:59:33.370260   25747 main.go:141] libmachine: (multinode-124911) Calling .GetState
	I0914 21:59:33.371945   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 21:59:33.372206   25747 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 21:59:33.372232   25747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 21:59:33.372251   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 21:59:33.374397   25747 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0914 21:59:33.374423   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:33.374433   25747 round_trippers.go:580]     Content-Length: 291
	I0914 21:59:33.374446   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:33 GMT
	I0914 21:59:33.374457   25747 round_trippers.go:580]     Audit-Id: d2b81305-7897-4ee9-b84a-36251d5d8269
	I0914 21:59:33.374470   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:33.374477   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:33.374488   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:33.374500   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:33.375111   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:33.375456   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 21:59:33.375499   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 21:59:33.375632   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 21:59:33.375805   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 21:59:33.375971   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 21:59:33.376100   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 21:59:33.384520   25747 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d40ee9-9834-4f82-84c2-51e3c14c181f","resourceVersion":"331","creationTimestamp":"2023-09-14T21:59:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0914 21:59:33.384672   25747 round_trippers.go:463] GET https://192.168.39.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 21:59:33.384683   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:33.384691   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:33.384697   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:33.410445   25747 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0914 21:59:33.410484   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:33.410494   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:33.410502   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:33.410510   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:33.410518   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:33.410525   25747 round_trippers.go:580]     Content-Length: 291
	I0914 21:59:33.410534   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:33 GMT
	I0914 21:59:33.410541   25747 round_trippers.go:580]     Audit-Id: 12755c76-89bb-41a4-9d26-77ba7951597f
	I0914 21:59:33.411325   25747 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d40ee9-9834-4f82-84c2-51e3c14c181f","resourceVersion":"331","creationTimestamp":"2023-09-14T21:59:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0914 21:59:33.411421   25747 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-124911" context rescaled to 1 replicas
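
The scale PUT above trims CoreDNS from the kubeadm default of two replicas down to one, which is roughly equivalent to:

    kubectl -n kube-system scale deployment coredns --replicas=1
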
	I0914 21:59:33.411449   25747 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 21:59:33.413948   25747 out.go:177] * Verifying Kubernetes components...
	I0914 21:59:33.415310   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 21:59:33.500868   25747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 21:59:33.540954   25747 command_runner.go:130] > apiVersion: v1
	I0914 21:59:33.540980   25747 command_runner.go:130] > data:
	I0914 21:59:33.540988   25747 command_runner.go:130] >   Corefile: |
	I0914 21:59:33.540994   25747 command_runner.go:130] >     .:53 {
	I0914 21:59:33.541000   25747 command_runner.go:130] >         errors
	I0914 21:59:33.541008   25747 command_runner.go:130] >         health {
	I0914 21:59:33.541016   25747 command_runner.go:130] >            lameduck 5s
	I0914 21:59:33.541022   25747 command_runner.go:130] >         }
	I0914 21:59:33.541029   25747 command_runner.go:130] >         ready
	I0914 21:59:33.541039   25747 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0914 21:59:33.541046   25747 command_runner.go:130] >            pods insecure
	I0914 21:59:33.541055   25747 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0914 21:59:33.541064   25747 command_runner.go:130] >            ttl 30
	I0914 21:59:33.541073   25747 command_runner.go:130] >         }
	I0914 21:59:33.541080   25747 command_runner.go:130] >         prometheus :9153
	I0914 21:59:33.541088   25747 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0914 21:59:33.541102   25747 command_runner.go:130] >            max_concurrent 1000
	I0914 21:59:33.541110   25747 command_runner.go:130] >         }
	I0914 21:59:33.541117   25747 command_runner.go:130] >         cache 30
	I0914 21:59:33.541127   25747 command_runner.go:130] >         loop
	I0914 21:59:33.541138   25747 command_runner.go:130] >         reload
	I0914 21:59:33.541146   25747 command_runner.go:130] >         loadbalance
	I0914 21:59:33.541153   25747 command_runner.go:130] >     }
	I0914 21:59:33.541162   25747 command_runner.go:130] > kind: ConfigMap
	I0914 21:59:33.541169   25747 command_runner.go:130] > metadata:
	I0914 21:59:33.541180   25747 command_runner.go:130] >   creationTimestamp: "2023-09-14T21:59:20Z"
	I0914 21:59:33.541194   25747 command_runner.go:130] >   name: coredns
	I0914 21:59:33.541200   25747 command_runner.go:130] >   namespace: kube-system
	I0914 21:59:33.541207   25747 command_runner.go:130] >   resourceVersion: "223"
	I0914 21:59:33.541218   25747 command_runner.go:130] >   uid: a21783c3-59aa-4441-b3d2-929766f52988
	I0914 21:59:33.543630   25747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
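
The sed pipeline above rewrites the CoreDNS ConfigMap printed earlier: it adds a log directive before errors and inserts a hosts block ahead of the forward stanza so that host.minikube.internal resolves to the host-only gateway (192.168.39.1). The patched Corefile can be inspected with:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # the inserted block should read:
    #        hosts {
    #           192.168.39.1 host.minikube.internal
    #           fallthrough
    #        }
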
	I0914 21:59:33.543839   25747 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:59:33.544137   25747 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 21:59:33.544438   25747 node_ready.go:35] waiting up to 6m0s for node "multinode-124911" to be "Ready" ...
	I0914 21:59:33.544524   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:33.544536   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:33.544547   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:33.544561   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:33.571348   25747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 21:59:33.596518   25747 round_trippers.go:574] Response Status: 200 OK in 51 milliseconds
	I0914 21:59:33.596546   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:33.596556   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:33.596565   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:33.596573   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:33 GMT
	I0914 21:59:33.596581   25747 round_trippers.go:580]     Audit-Id: e5af8592-8941-498a-8424-0c18e5f77c59
	I0914 21:59:33.596588   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:33.596596   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:33.596739   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:33.597529   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:33.597548   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:33.597560   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:33.597573   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:33.703023   25747 round_trippers.go:574] Response Status: 200 OK in 105 milliseconds
	I0914 21:59:33.703052   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:33.703063   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:33.703076   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:33 GMT
	I0914 21:59:33.703084   25747 round_trippers.go:580]     Audit-Id: e3d40539-5f5f-4288-a9f6-e4c4e293e100
	I0914 21:59:33.703092   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:33.703098   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:33.703103   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:33.706362   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:34.207562   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:34.207590   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:34.207603   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:34.207613   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:34.219710   25747 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0914 21:59:34.219742   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:34.219751   25747 round_trippers.go:580]     Audit-Id: 51522614-bbc6-4720-99e7-50d1c61115f4
	I0914 21:59:34.219757   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:34.219762   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:34.219768   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:34.219773   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:34.219778   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:34 GMT
	I0914 21:59:34.220879   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:34.413929   25747 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0914 21:59:34.424521   25747 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0914 21:59:34.432838   25747 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0914 21:59:34.441613   25747 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0914 21:59:34.450810   25747 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0914 21:59:34.465713   25747 command_runner.go:130] > pod/storage-provisioner created
	I0914 21:59:34.468254   25747 main.go:141] libmachine: Making call to close driver server
	I0914 21:59:34.468280   25747 main.go:141] libmachine: (multinode-124911) Calling .Close
	I0914 21:59:34.468286   25747 command_runner.go:130] > configmap/coredns replaced
	I0914 21:59:34.468327   25747 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0914 21:59:34.468377   25747 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0914 21:59:34.468406   25747 main.go:141] libmachine: Making call to close driver server
	I0914 21:59:34.468416   25747 main.go:141] libmachine: (multinode-124911) Calling .Close
	I0914 21:59:34.468575   25747 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:59:34.468629   25747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:59:34.468642   25747 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:59:34.468661   25747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:59:34.468672   25747 main.go:141] libmachine: Making call to close driver server
	I0914 21:59:34.468676   25747 main.go:141] libmachine: Making call to close driver server
	I0914 21:59:34.468682   25747 main.go:141] libmachine: (multinode-124911) Calling .Close
	I0914 21:59:34.468686   25747 main.go:141] libmachine: (multinode-124911) Calling .Close
	I0914 21:59:34.468593   25747 main.go:141] libmachine: (multinode-124911) DBG | Closing plugin on server side
	I0914 21:59:34.468632   25747 main.go:141] libmachine: (multinode-124911) DBG | Closing plugin on server side
	I0914 21:59:34.468925   25747 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:59:34.468960   25747 main.go:141] libmachine: (multinode-124911) DBG | Closing plugin on server side
	I0914 21:59:34.468973   25747 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:59:34.468987   25747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:59:34.469001   25747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:59:34.469043   25747 main.go:141] libmachine: Making call to close driver server
	I0914 21:59:34.469066   25747 main.go:141] libmachine: (multinode-124911) Calling .Close
	I0914 21:59:34.469065   25747 main.go:141] libmachine: (multinode-124911) DBG | Closing plugin on server side
	I0914 21:59:34.469327   25747 main.go:141] libmachine: (multinode-124911) DBG | Closing plugin on server side
	I0914 21:59:34.469381   25747 main.go:141] libmachine: Successfully made call to close driver server
	I0914 21:59:34.469397   25747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 21:59:34.471350   25747 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 21:59:34.472865   25747 addons.go:502] enable addons completed in 1.170064177s: enabled=[storage-provisioner default-storageclass]
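	(Editor's aside, not part of the test transcript: the block above records the addon objects being created — serviceaccount, RBAC bindings, the storage-provisioner pod and the "standard" StorageClass — before the driver-server connections are closed. Purely for illustration, and assuming the kubectl context matches the minikube profile name shown in the log, those objects could later be inspected with commands along these lines:

	    kubectl --context multinode-124911 -n kube-system get pod storage-provisioner
	    kubectl --context multinode-124911 get storageclass standard

	These commands are a hedged sketch for readers of the log, not output or steps from the test run itself.)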
	I0914 21:59:34.706893   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:34.706920   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:34.706931   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:34.706939   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:34.709565   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:34.709600   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:34.709609   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:34 GMT
	I0914 21:59:34.709617   25747 round_trippers.go:580]     Audit-Id: 5f91ff9c-7bca-4f6b-be55-748e21c44157
	I0914 21:59:34.709626   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:34.709634   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:34.709645   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:34.709658   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:34.709821   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:35.207335   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:35.207359   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:35.207367   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:35.207374   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:35.210126   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:35.210150   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:35.210172   25747 round_trippers.go:580]     Audit-Id: d79ec0bd-cd08-4234-8bc6-2b7f6ecf759e
	I0914 21:59:35.210185   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:35.210194   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:35.210203   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:35.210211   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:35.210219   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:35 GMT
	I0914 21:59:35.210400   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:35.706968   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:35.707005   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:35.707013   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:35.707019   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:35.710122   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 21:59:35.710151   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:35.710160   25747 round_trippers.go:580]     Audit-Id: 3d70e150-d1cf-4a64-82a6-47704728d506
	I0914 21:59:35.710168   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:35.710176   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:35.710184   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:35.710192   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:35.710200   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:35 GMT
	I0914 21:59:35.710363   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:35.710718   25747 node_ready.go:58] node "multinode-124911" has status "Ready":"False"
	I0914 21:59:36.207598   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:36.207635   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:36.207643   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:36.207652   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:36.210400   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:36.210422   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:36.210430   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:36.210438   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:36.210446   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:36.210470   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:36 GMT
	I0914 21:59:36.210485   25747 round_trippers.go:580]     Audit-Id: a335c89e-d0be-40c4-b259-3023f2b74094
	I0914 21:59:36.210492   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:36.210580   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:36.707198   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:36.707221   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:36.707229   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:36.707235   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:36.710083   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:36.710104   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:36.710111   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:36 GMT
	I0914 21:59:36.710116   25747 round_trippers.go:580]     Audit-Id: 4891b585-4d4f-4483-85aa-baa7be83a4da
	I0914 21:59:36.710121   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:36.710126   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:36.710131   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:36.710136   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:36.710300   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:37.207839   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:37.207864   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:37.207872   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:37.207878   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:37.210578   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:37.210600   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:37.210608   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:37 GMT
	I0914 21:59:37.210614   25747 round_trippers.go:580]     Audit-Id: 0c2e9554-ec9c-42da-8607-3060576bada9
	I0914 21:59:37.210619   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:37.210624   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:37.210629   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:37.210635   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:37.210813   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:37.707344   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:37.707362   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:37.707370   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:37.707376   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:37.710158   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:37.710180   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:37.710189   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:37.710197   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:37.710205   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:37.710228   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:37 GMT
	I0914 21:59:37.710241   25747 round_trippers.go:580]     Audit-Id: 83b06459-2ebc-4d13-bfb4-5609447b4a7e
	I0914 21:59:37.710254   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:37.710373   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:37.710811   25747 node_ready.go:58] node "multinode-124911" has status "Ready":"False"
	I0914 21:59:38.206989   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:38.207015   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:38.207023   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:38.207030   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:38.209606   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:38.209629   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:38.209637   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:38 GMT
	I0914 21:59:38.209645   25747 round_trippers.go:580]     Audit-Id: 515cdce3-8562-4cbd-a7f9-313fdada0b99
	I0914 21:59:38.209655   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:38.209663   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:38.209670   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:38.209678   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:38.209981   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:38.707420   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:38.707445   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:38.707453   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:38.707459   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:38.710098   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:38.710119   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:38.710130   25747 round_trippers.go:580]     Audit-Id: 4a3e8665-2913-4ea4-8fdc-3acd0561be2a
	I0914 21:59:38.710138   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:38.710146   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:38.710153   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:38.710168   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:38.710181   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:38 GMT
	I0914 21:59:38.710362   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:39.206920   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:39.206952   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:39.206961   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:39.206967   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:39.209948   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:39.209967   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:39.209975   25747 round_trippers.go:580]     Audit-Id: dfe25801-87cb-46c5-abff-880f4ada8f23
	I0914 21:59:39.209981   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:39.209986   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:39.209992   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:39.209997   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:39.210002   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:39 GMT
	I0914 21:59:39.210172   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:39.707892   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:39.707923   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:39.707931   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:39.707937   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:39.711334   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 21:59:39.711351   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:39.711358   25747 round_trippers.go:580]     Audit-Id: f28cd02e-2b17-4874-bab8-8cea8b69dc32
	I0914 21:59:39.711363   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:39.711368   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:39.711373   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:39.711378   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:39.711383   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:39 GMT
	I0914 21:59:39.711706   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:39.712046   25747 node_ready.go:58] node "multinode-124911" has status "Ready":"False"
	I0914 21:59:40.207363   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:40.207387   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:40.207398   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:40.207409   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:40.210338   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:40.210362   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:40.210372   25747 round_trippers.go:580]     Audit-Id: f1b5bf17-b076-4a43-b94e-059065129259
	I0914 21:59:40.210380   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:40.210388   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:40.210396   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:40.210404   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:40.210413   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:40 GMT
	I0914 21:59:40.210911   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:40.707167   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:40.707190   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:40.707201   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:40.707209   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:40.710327   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 21:59:40.710348   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:40.710356   25747 round_trippers.go:580]     Audit-Id: ada48e7d-5f70-400a-ba0b-4342d88e4bb5
	I0914 21:59:40.710363   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:40.710371   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:40.710379   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:40.710388   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:40.710398   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:40 GMT
	I0914 21:59:40.710540   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:41.207173   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:41.207198   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:41.207208   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:41.207216   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:41.209451   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:41.209473   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:41.209482   25747 round_trippers.go:580]     Audit-Id: 5988b712-5325-407a-b251-139c3c18a84c
	I0914 21:59:41.209491   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:41.209496   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:41.209502   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:41.209510   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:41.209519   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:41 GMT
	I0914 21:59:41.209912   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"336","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0914 21:59:41.707626   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:41.707650   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:41.707660   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:41.707669   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:41.712065   25747 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 21:59:41.712089   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:41.712096   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:41.712101   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:41.712107   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:41.712112   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:41.712116   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:41 GMT
	I0914 21:59:41.712121   25747 round_trippers.go:580]     Audit-Id: 39d4f268-3b05-4db6-a680-dacb347e1481
	I0914 21:59:41.712597   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:41.712961   25747 node_ready.go:49] node "multinode-124911" has status "Ready":"True"
	I0914 21:59:41.712976   25747 node_ready.go:38] duration metric: took 8.168518279s waiting for node "multinode-124911" to be "Ready" ...
	I0914 21:59:41.712984   25747 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
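	(Editor's aside, not part of the test transcript: at this point the node has reported Ready after roughly 8.2s and the repeated GET requests that follow are minikube's poll loop over the system-critical pods listed above. As an illustrative sketch only, and assuming the kubectl context matches the profile name in the log, the same conditions could be checked by hand with:

	    kubectl --context multinode-124911 get node multinode-124911 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    kubectl --context multinode-124911 -n kube-system get pods -l k8s-app=kube-dns -o wide

	These are reader-facing examples, not commands executed by the test.)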
	I0914 21:59:41.713043   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 21:59:41.713056   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:41.713064   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:41.713070   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:41.720477   25747 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 21:59:41.720500   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:41.720510   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:41.720519   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:41 GMT
	I0914 21:59:41.720529   25747 round_trippers.go:580]     Audit-Id: 0612b521-cfc6-4281-9af8-44f3cb20cdfc
	I0914 21:59:41.720538   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:41.720547   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:41.720559   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:41.721127   25747 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"395"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"395","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52377 chars]
	I0914 21:59:41.724055   25747 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:41.724116   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 21:59:41.724124   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:41.724131   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:41.724137   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:41.732399   25747 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 21:59:41.732419   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:41.732427   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:41.732444   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:41 GMT
	I0914 21:59:41.732452   25747 round_trippers.go:580]     Audit-Id: 4ec048fd-c062-4806-93bc-bab2cd672fb5
	I0914 21:59:41.732459   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:41.732468   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:41.732476   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:41.732630   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"395","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0914 21:59:41.733182   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:41.733206   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:41.733218   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:41.733228   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:41.734939   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 21:59:41.734958   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:41.734967   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:41 GMT
	I0914 21:59:41.734977   25747 round_trippers.go:580]     Audit-Id: d2d910b6-0461-4aeb-bd27-085a929c9043
	I0914 21:59:41.734986   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:41.734994   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:41.735003   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:41.735011   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:41.735162   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:41.735603   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 21:59:41.735621   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:41.735633   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:41.735643   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:41.737916   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:41.737934   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:41.737944   25747 round_trippers.go:580]     Audit-Id: b6f2aa9c-28f7-4ddc-8bd7-98c7363a4f9a
	I0914 21:59:41.737952   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:41.737961   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:41.737973   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:41.737981   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:41.737990   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:41 GMT
	I0914 21:59:41.738327   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"395","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0914 21:59:41.738702   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:41.738710   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:41.738717   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:41.738722   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:41.740664   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 21:59:41.740682   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:41.740692   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:41.740701   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:41.740710   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:41.740718   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:41 GMT
	I0914 21:59:41.740732   25747 round_trippers.go:580]     Audit-Id: 44ea25d4-f222-4ef1-bfe9-61d944c38740
	I0914 21:59:41.740745   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:41.740880   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:42.241740   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 21:59:42.241763   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:42.241771   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:42.241778   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:42.244310   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:42.244329   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:42.244335   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:42.244340   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:42.244345   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:42 GMT
	I0914 21:59:42.244350   25747 round_trippers.go:580]     Audit-Id: 4eefd682-f506-44bc-b4ae-a21c8f8f6e27
	I0914 21:59:42.244355   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:42.244360   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:42.244578   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"395","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0914 21:59:42.245004   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:42.245016   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:42.245023   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:42.245030   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:42.247029   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 21:59:42.247046   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:42.247052   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:42.247057   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:42.247063   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:42.247068   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:42 GMT
	I0914 21:59:42.247073   25747 round_trippers.go:580]     Audit-Id: 4c6a94c2-f072-46a5-af6b-7d744837de21
	I0914 21:59:42.247078   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:42.247232   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:42.741471   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 21:59:42.741494   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:42.741502   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:42.741508   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:42.744147   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:42.744167   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:42.744177   25747 round_trippers.go:580]     Audit-Id: 9669d601-2014-4a56-a3d5-026a6629f1dd
	I0914 21:59:42.744184   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:42.744191   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:42.744198   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:42.744206   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:42.744214   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:42 GMT
	I0914 21:59:42.744349   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"395","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0914 21:59:42.744836   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:42.744857   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:42.744868   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:42.744877   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:42.748656   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 21:59:42.748679   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:42.748689   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:42 GMT
	I0914 21:59:42.748698   25747 round_trippers.go:580]     Audit-Id: 1237646f-6a2a-41b4-b5a3-5a50ae0a1c39
	I0914 21:59:42.748705   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:42.748714   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:42.748723   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:42.748731   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:42.748890   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:43.241487   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 21:59:43.241510   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:43.241518   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:43.241524   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:43.244410   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:43.244430   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:43.244439   25747 round_trippers.go:580]     Audit-Id: d27b1670-01ab-47e4-a227-c76408aa5327
	I0914 21:59:43.244447   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:43.244455   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:43.244462   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:43.244469   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:43.244477   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:43 GMT
	I0914 21:59:43.244672   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"409","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6494 chars]
	I0914 21:59:43.245222   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:43.245237   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:43.245256   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:43.245262   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:43.247920   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:43.247938   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:43.247946   25747 round_trippers.go:580]     Audit-Id: 0494ab59-d9ad-4c3f-853c-7bc8accc6d32
	I0914 21:59:43.247954   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:43.247963   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:43.247970   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:43.247977   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:43.247986   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:43 GMT
	I0914 21:59:43.248299   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:43.741642   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 21:59:43.741671   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:43.741682   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:43.741690   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:43.744672   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:43.744689   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:43.744696   25747 round_trippers.go:580]     Audit-Id: 7704deeb-c248-4619-9807-bd078284644b
	I0914 21:59:43.744701   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:43.744712   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:43.744720   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:43.744730   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:43.744742   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:43 GMT
	I0914 21:59:43.745106   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"409","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6494 chars]
	I0914 21:59:43.745579   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:43.745593   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:43.745600   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:43.745610   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:43.747886   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:43.747905   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:43.747915   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:43.747927   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:43 GMT
	I0914 21:59:43.747940   25747 round_trippers.go:580]     Audit-Id: 3dc6a704-0073-4738-8103-283b304c166d
	I0914 21:59:43.747949   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:43.747960   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:43.747968   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:43.748109   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:43.748426   25747 pod_ready.go:102] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"False"
	I0914 21:59:44.241797   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 21:59:44.241823   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.241834   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.241843   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.244838   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:44.244864   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.244875   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.244885   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.244895   25747 round_trippers.go:580]     Audit-Id: ef261eb1-f912-4c30-8fba-f78aa037f6c6
	I0914 21:59:44.244904   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.244915   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.244927   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.245141   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"412","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0914 21:59:44.245648   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:44.245662   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.245669   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.245675   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.247887   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:44.247904   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.247914   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.247923   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.247939   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.247951   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.247960   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.247972   25747 round_trippers.go:580]     Audit-Id: a52a8614-5010-4724-b69f-5e88543936d1
	I0914 21:59:44.248095   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:44.248485   25747 pod_ready.go:92] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"True"
	I0914 21:59:44.248502   25747 pod_ready.go:81] duration metric: took 2.524427546s waiting for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.248511   25747 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.248557   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-124911
	I0914 21:59:44.248564   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.248570   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.248579   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.250344   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 21:59:44.250358   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.250366   25747 round_trippers.go:580]     Audit-Id: 8524f3f6-a163-45a1-b207-013a141b6cfd
	I0914 21:59:44.250374   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.250383   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.250395   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.250404   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.250414   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.250985   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124911","namespace":"kube-system","uid":"1b195f1a-48a6-4b46-a819-2aeb9fe4e00c","resourceVersion":"382","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.116:2379","kubernetes.io/config.hash":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.mirror":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.seen":"2023-09-14T21:59:20.641783376Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0914 21:59:44.251303   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:44.251317   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.251327   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.251336   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.253068   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 21:59:44.253087   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.253096   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.253104   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.253117   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.253125   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.253137   25747 round_trippers.go:580]     Audit-Id: 3a6e04a4-7ad8-4dd3-a2ef-bdedbad9090a
	I0914 21:59:44.253145   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.253236   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:44.253588   25747 pod_ready.go:92] pod "etcd-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 21:59:44.253607   25747 pod_ready.go:81] duration metric: took 5.08664ms waiting for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.253623   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.253682   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124911
	I0914 21:59:44.253692   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.253701   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.253714   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.255316   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 21:59:44.255330   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.255339   25747 round_trippers.go:580]     Audit-Id: 89c9337a-1f0a-4797-870a-efef1b4f0273
	I0914 21:59:44.255348   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.255360   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.255373   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.255389   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.255401   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.255681   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124911","namespace":"kube-system","uid":"e9a93d33-82f3-4cfe-9b2c-92560dd09d09","resourceVersion":"383","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.116:8443","kubernetes.io/config.hash":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.mirror":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.seen":"2023-09-14T21:59:20.641778793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0914 21:59:44.256019   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:44.256031   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.256041   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.256049   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.258295   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:44.258307   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.258316   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.258324   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.258333   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.258346   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.258358   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.258371   25747 round_trippers.go:580]     Audit-Id: c7e5c4ab-15bb-4e04-943c-6117ac9ba7e0
	I0914 21:59:44.258860   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:44.259213   25747 pod_ready.go:92] pod "kube-apiserver-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 21:59:44.259237   25747 pod_ready.go:81] duration metric: took 5.601214ms waiting for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.259252   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.259302   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124911
	I0914 21:59:44.259313   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.259323   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.259335   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.261101   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 21:59:44.261115   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.261124   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.261132   25747 round_trippers.go:580]     Audit-Id: ae53c1fa-b613-4d07-ba70-a4c3726dd9bc
	I0914 21:59:44.261141   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.261150   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.261160   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.261173   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.261316   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124911","namespace":"kube-system","uid":"3efae123-9cdd-457a-a317-77370a6c5288","resourceVersion":"384","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.mirror":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.seen":"2023-09-14T21:59:20.641781682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0914 21:59:44.261811   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:44.261828   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.261838   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.261848   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.264554   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:44.264568   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.264577   25747 round_trippers.go:580]     Audit-Id: 887d25f8-f80c-416b-98e0-60d5a47ed7eb
	I0914 21:59:44.264585   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.264593   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.264604   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.264617   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.264631   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.264805   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:44.265139   25747 pod_ready.go:92] pod "kube-controller-manager-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 21:59:44.265156   25747 pod_ready.go:81] duration metric: took 5.895129ms waiting for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.265167   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.308386   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 21:59:44.308409   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.308418   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.308427   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.311672   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 21:59:44.311689   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.311695   25747 round_trippers.go:580]     Audit-Id: 918238ec-0ada-4547-aaf7-6bdaf73a6fef
	I0914 21:59:44.311701   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.311706   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.311719   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.311725   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.311732   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.311838   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2kd4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"de9e2ee3-364a-447b-9d7f-be85d86838ae","resourceVersion":"375","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0914 21:59:44.508669   25747 request.go:629] Waited for 196.458638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:44.508752   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:44.508759   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.508766   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.508775   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.512042   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 21:59:44.512058   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.512067   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.512075   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.512083   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.512092   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.512105   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.512114   25747 round_trippers.go:580]     Audit-Id: 0489ecab-35fb-453d-9e16-69498c6972e5
	I0914 21:59:44.512627   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:44.512912   25747 pod_ready.go:92] pod "kube-proxy-2kd4p" in "kube-system" namespace has status "Ready":"True"
	I0914 21:59:44.512938   25747 pod_ready.go:81] duration metric: took 247.75308ms waiting for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
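The recurring request.go:629 "Waited ... due to client-side throttling, not priority and fairness" messages above are produced by client-go's own rate limiter, not by the API server's Priority and Fairness machinery. As a hedged sketch (this is not minikube's actual configuration, and the QPS/Burst values are illustrative assumptions), this is where those client-side limits live when a client is built:

```go
package k8sclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterClient is an illustrative helper, not minikube code. client-go
// throttles requests on the client side through rest.Config.QPS and
// rest.Config.Burst (defaults 5 and 10), which is what produces the
// "Waited ... due to client-side throttling" lines in the log above.
// The values below are assumptions chosen only for demonstration.
func newFasterClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}
```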
	I0914 21:59:44.512952   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.708432   25747 request.go:629] Waited for 195.410366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 21:59:44.708498   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 21:59:44.708512   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.708522   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.708535   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.711033   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:44.711054   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.711063   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.711071   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.711079   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.711088   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.711098   25747 round_trippers.go:580]     Audit-Id: 1d15e4da-0949-4849-aff3-a8bfaaa22621
	I0914 21:59:44.711109   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.711276   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124911","namespace":"kube-system","uid":"f8d502b7-9ee7-474e-ab64-9f721ee6970e","resourceVersion":"360","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.mirror":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.seen":"2023-09-14T21:59:20.641782607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0914 21:59:44.908027   25747 request.go:629] Waited for 196.344519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:44.908105   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 21:59:44.908114   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.908129   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.908140   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.910747   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:44.910770   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.910779   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.910787   25747 round_trippers.go:580]     Audit-Id: 1823cbf4-2a83-40e4-a36f-7c0df6d0b4f0
	I0914 21:59:44.910794   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.910801   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.910813   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.910820   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.910970   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0914 21:59:44.911341   25747 pod_ready.go:92] pod "kube-scheduler-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 21:59:44.911421   25747 pod_ready.go:81] duration metric: took 398.451124ms waiting for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 21:59:44.911452   25747 pod_ready.go:38] duration metric: took 3.198454813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
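Each repeated block above is one iteration of the pod_ready wait: GET the pod, inspect its Ready condition, GET its node, pause roughly half a second, and retry until the condition is True or the timeout expires. The loop below is a rough client-go sketch of that pattern, not minikube's actual pod_ready.go; the kubeconfig path and the 500ms interval are assumptions, while the namespace, pod name and 6-minute timeout are taken from the log.

```go
// Illustrative sketch only; mirrors the wait loop traced in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path, assumed for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-ssj9q", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(500 * time.Millisecond): // assumed poll interval
		}
	}
}
```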
	I0914 21:59:44.911489   25747 api_server.go:52] waiting for apiserver process to appear ...
	I0914 21:59:44.911543   25747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 21:59:44.924959   25747 command_runner.go:130] > 1097
	I0914 21:59:44.925076   25747 api_server.go:72] duration metric: took 11.513584338s to wait for apiserver process to appear ...
	I0914 21:59:44.925096   25747 api_server.go:88] waiting for apiserver healthz status ...
	I0914 21:59:44.925114   25747 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 21:59:44.931179   25747 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0914 21:59:44.931238   25747 round_trippers.go:463] GET https://192.168.39.116:8443/version
	I0914 21:59:44.931247   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:44.931254   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:44.931260   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:44.932337   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 21:59:44.932353   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:44.932365   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:44.932373   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:44.932380   25747 round_trippers.go:580]     Content-Length: 263
	I0914 21:59:44.932392   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:44 GMT
	I0914 21:59:44.932408   25747 round_trippers.go:580]     Audit-Id: 0daeec9d-e516-4662-a777-e1fa2f0c37b9
	I0914 21:59:44.932417   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:44.932426   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:44.932524   25747 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 21:59:44.932631   25747 api_server.go:141] control plane version: v1.28.1
	I0914 21:59:44.932653   25747 api_server.go:131] duration metric: took 7.550252ms to wait for apiserver health ...
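The healthz and /version probes above reduce to two cheap calls against the API server. Below is a hedged illustration of the same two checks using client-go's discovery client; it assumes a *kubernetes.Clientset built as in the earlier sketch and is not the api_server.go implementation itself.

```go
package apichecks

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer mirrors the two probes in the log: GET /healthz (shown
// returning "ok" above) and GET /version (which reported v1.28.1).
func checkAPIServer(ctx context.Context, clientset *kubernetes.Clientset) error {
	body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return fmt.Errorf("apiserver healthz failed: %w", err)
	}
	fmt.Printf("healthz: %s\n", body)

	info, err := clientset.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Println("control plane version:", info.GitVersion)
	return nil
}
```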
	I0914 21:59:44.932664   25747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 21:59:45.108068   25747 request.go:629] Waited for 175.33294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 21:59:45.108146   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 21:59:45.108154   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:45.108164   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:45.108178   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:45.111763   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 21:59:45.111787   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:45.111797   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:45.111806   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:45.111814   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:45.111826   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:45 GMT
	I0914 21:59:45.111838   25747 round_trippers.go:580]     Audit-Id: 5bc7d54d-a77e-4776-a67d-69db836e1f47
	I0914 21:59:45.111846   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:45.112722   25747 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"412","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0914 21:59:45.114501   25747 system_pods.go:59] 8 kube-system pods found
	I0914 21:59:45.114529   25747 system_pods.go:61] "coredns-5dd5756b68-ssj9q" [aadacae8-9f4d-4c24-91c7-78a88d187f73] Running
	I0914 21:59:45.114543   25747 system_pods.go:61] "etcd-multinode-124911" [1b195f1a-48a6-4b46-a819-2aeb9fe4e00c] Running
	I0914 21:59:45.114549   25747 system_pods.go:61] "kindnet-274xj" [6d12f7c0-2ad9-436f-ab5d-528c4823865c] Running
	I0914 21:59:45.114559   25747 system_pods.go:61] "kube-apiserver-multinode-124911" [e9a93d33-82f3-4cfe-9b2c-92560dd09d09] Running
	I0914 21:59:45.114569   25747 system_pods.go:61] "kube-controller-manager-multinode-124911" [3efae123-9cdd-457a-a317-77370a6c5288] Running
	I0914 21:59:45.114578   25747 system_pods.go:61] "kube-proxy-2kd4p" [de9e2ee3-364a-447b-9d7f-be85d86838ae] Running
	I0914 21:59:45.114585   25747 system_pods.go:61] "kube-scheduler-multinode-124911" [f8d502b7-9ee7-474e-ab64-9f721ee6970e] Running
	I0914 21:59:45.114590   25747 system_pods.go:61] "storage-provisioner" [aada9d30-e15d-4405-a7e2-e979dd4b8e0d] Running
	I0914 21:59:45.114596   25747 system_pods.go:74] duration metric: took 181.924024ms to wait for pod list to return data ...
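The "8 kube-system pods found" summary above comes from a single list of the kube-system namespace. The following is a minimal sketch of the equivalent check, assuming the same clientset and context as the earlier sketches; the simple Running-phase test is a simplification of minikube's actual system_pods logic.

```go
package apichecks

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// kubeSystemPodsRunning lists the kube-system pods, as the system_pods wait
// in the log does, and fails if any of them is not in the Running phase.
func kubeSystemPodsRunning(ctx context.Context, clientset *kubernetes.Clientset) error {
	pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %s is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	return nil
}
```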
	I0914 21:59:45.114606   25747 default_sa.go:34] waiting for default service account to be created ...
	I0914 21:59:45.308039   25747 request.go:629] Waited for 193.361563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0914 21:59:45.308104   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0914 21:59:45.308111   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:45.308121   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:45.308129   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:45.312845   25747 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 21:59:45.312862   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:45.312871   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:45.312879   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:45.312887   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:45.312897   25747 round_trippers.go:580]     Content-Length: 261
	I0914 21:59:45.312902   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:45 GMT
	I0914 21:59:45.312907   25747 round_trippers.go:580]     Audit-Id: 70d5523e-80a5-45bb-8de1-326ec03364cb
	I0914 21:59:45.312913   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:45.312938   25747 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2fad9e6d-ab87-4f3f-9379-cd375b431267","resourceVersion":"303","creationTimestamp":"2023-09-14T21:59:32Z"}}]}
	I0914 21:59:45.313150   25747 default_sa.go:45] found service account: "default"
	I0914 21:59:45.313172   25747 default_sa.go:55] duration metric: took 198.559437ms for default service account to be created ...
	I0914 21:59:45.313181   25747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 21:59:45.508714   25747 request.go:629] Waited for 195.454224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 21:59:45.508784   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 21:59:45.508792   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:45.508803   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:45.508813   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:45.512559   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 21:59:45.512580   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:45.512588   25747 round_trippers.go:580]     Audit-Id: 60e8fd90-8276-4455-8864-40443b48c114
	I0914 21:59:45.512595   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:45.512602   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:45.512611   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:45.512619   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:45.512629   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:45 GMT
	I0914 21:59:45.513400   25747 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"412","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0914 21:59:45.515718   25747 system_pods.go:86] 8 kube-system pods found
	I0914 21:59:45.515741   25747 system_pods.go:89] "coredns-5dd5756b68-ssj9q" [aadacae8-9f4d-4c24-91c7-78a88d187f73] Running
	I0914 21:59:45.515749   25747 system_pods.go:89] "etcd-multinode-124911" [1b195f1a-48a6-4b46-a819-2aeb9fe4e00c] Running
	I0914 21:59:45.515757   25747 system_pods.go:89] "kindnet-274xj" [6d12f7c0-2ad9-436f-ab5d-528c4823865c] Running
	I0914 21:59:45.515765   25747 system_pods.go:89] "kube-apiserver-multinode-124911" [e9a93d33-82f3-4cfe-9b2c-92560dd09d09] Running
	I0914 21:59:45.515776   25747 system_pods.go:89] "kube-controller-manager-multinode-124911" [3efae123-9cdd-457a-a317-77370a6c5288] Running
	I0914 21:59:45.515780   25747 system_pods.go:89] "kube-proxy-2kd4p" [de9e2ee3-364a-447b-9d7f-be85d86838ae] Running
	I0914 21:59:45.515788   25747 system_pods.go:89] "kube-scheduler-multinode-124911" [f8d502b7-9ee7-474e-ab64-9f721ee6970e] Running
	I0914 21:59:45.515798   25747 system_pods.go:89] "storage-provisioner" [aada9d30-e15d-4405-a7e2-e979dd4b8e0d] Running
	I0914 21:59:45.515810   25747 system_pods.go:126] duration metric: took 202.623754ms to wait for k8s-apps to be running ...
	I0914 21:59:45.515822   25747 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 21:59:45.515870   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 21:59:45.530031   25747 system_svc.go:56] duration metric: took 14.202811ms WaitForService to wait for kubelet.
	I0914 21:59:45.530050   25747 kubeadm.go:581] duration metric: took 12.118560656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 21:59:45.530065   25747 node_conditions.go:102] verifying NodePressure condition ...
	I0914 21:59:45.708456   25747 request.go:629] Waited for 178.333691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0914 21:59:45.708520   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0914 21:59:45.708525   25747 round_trippers.go:469] Request Headers:
	I0914 21:59:45.708532   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 21:59:45.708538   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 21:59:45.711174   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 21:59:45.711199   25747 round_trippers.go:577] Response Headers:
	I0914 21:59:45.711210   25747 round_trippers.go:580]     Audit-Id: 7f478d7b-9e02-4a8d-a829-6c1417ce4042
	I0914 21:59:45.711217   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 21:59:45.711222   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 21:59:45.711227   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 21:59:45.711232   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 21:59:45.711241   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 21:59:45 GMT
	I0914 21:59:45.711731   25747 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"390","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0914 21:59:45.712084   25747 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 21:59:45.712103   25747 node_conditions.go:123] node cpu capacity is 2
	I0914 21:59:45.712115   25747 node_conditions.go:105] duration metric: took 182.046518ms to run NodePressure ...
	I0914 21:59:45.712125   25747 start.go:228] waiting for startup goroutines ...
	I0914 21:59:45.712134   25747 start.go:233] waiting for cluster config update ...
	I0914 21:59:45.712143   25747 start.go:242] writing updated cluster config ...
	I0914 21:59:45.714466   25747 out.go:177] 
	I0914 21:59:45.715984   25747 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 21:59:45.716069   25747 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 21:59:45.717940   25747 out.go:177] * Starting worker node multinode-124911-m02 in cluster multinode-124911
	I0914 21:59:45.719310   25747 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 21:59:45.719329   25747 cache.go:57] Caching tarball of preloaded images
	I0914 21:59:45.719405   25747 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 21:59:45.719415   25747 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 21:59:45.719485   25747 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 21:59:45.719643   25747 start.go:365] acquiring machines lock for multinode-124911-m02: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 21:59:45.719682   25747 start.go:369] acquired machines lock for "multinode-124911-m02" in 20.736µs
	I0914 21:59:45.719698   25747 start.go:93] Provisioning new machine with config: &{Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 21:59:45.719753   25747 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0914 21:59:45.721318   25747 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 21:59:45.721393   25747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:59:45.721435   25747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:59:45.735247   25747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I0914 21:59:45.735650   25747 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:59:45.736180   25747 main.go:141] libmachine: Using API Version  1
	I0914 21:59:45.736201   25747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:59:45.736532   25747 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:59:45.736750   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetMachineName
	I0914 21:59:45.736904   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 21:59:45.737109   25747 start.go:159] libmachine.API.Create for "multinode-124911" (driver="kvm2")
	I0914 21:59:45.737133   25747 client.go:168] LocalClient.Create starting
	I0914 21:59:45.737161   25747 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem
	I0914 21:59:45.737191   25747 main.go:141] libmachine: Decoding PEM data...
	I0914 21:59:45.737206   25747 main.go:141] libmachine: Parsing certificate...
	I0914 21:59:45.737263   25747 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem
	I0914 21:59:45.737291   25747 main.go:141] libmachine: Decoding PEM data...
	I0914 21:59:45.737309   25747 main.go:141] libmachine: Parsing certificate...
	I0914 21:59:45.737335   25747 main.go:141] libmachine: Running pre-create checks...
	I0914 21:59:45.737352   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .PreCreateCheck
	I0914 21:59:45.737496   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetConfigRaw
	I0914 21:59:45.737850   25747 main.go:141] libmachine: Creating machine...
	I0914 21:59:45.737863   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .Create
	I0914 21:59:45.737990   25747 main.go:141] libmachine: (multinode-124911-m02) Creating KVM machine...
	I0914 21:59:45.739181   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found existing default KVM network
	I0914 21:59:45.739294   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found existing private KVM network mk-multinode-124911
	I0914 21:59:45.739439   25747 main.go:141] libmachine: (multinode-124911-m02) Setting up store path in /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02 ...
	I0914 21:59:45.739454   25747 main.go:141] libmachine: (multinode-124911-m02) Building disk image from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso
	I0914 21:59:45.739526   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:45.739430   26106 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:59:45.739622   25747 main.go:141] libmachine: (multinode-124911-m02) Downloading /home/jenkins/minikube-integration/17243-6287/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso...
	I0914 21:59:45.939425   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:45.939259   26106 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa...
	I0914 21:59:46.039711   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:46.039589   26106 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/multinode-124911-m02.rawdisk...
	I0914 21:59:46.039739   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Writing magic tar header
	I0914 21:59:46.039751   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Writing SSH key tar header
	I0914 21:59:46.039759   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:46.039714   26106 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02 ...
	I0914 21:59:46.039834   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02
	I0914 21:59:46.039854   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines
	I0914 21:59:46.039872   25747 main.go:141] libmachine: (multinode-124911-m02) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02 (perms=drwx------)
	I0914 21:59:46.039899   25747 main.go:141] libmachine: (multinode-124911-m02) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines (perms=drwxr-xr-x)
	I0914 21:59:46.039911   25747 main.go:141] libmachine: (multinode-124911-m02) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube (perms=drwxr-xr-x)
	I0914 21:59:46.039928   25747 main.go:141] libmachine: (multinode-124911-m02) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287 (perms=drwxrwxr-x)
	I0914 21:59:46.039937   25747 main.go:141] libmachine: (multinode-124911-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 21:59:46.039954   25747 main.go:141] libmachine: (multinode-124911-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 21:59:46.039988   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:59:46.040012   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287
	I0914 21:59:46.040021   25747 main.go:141] libmachine: (multinode-124911-m02) Creating domain...
	I0914 21:59:46.040036   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 21:59:46.040046   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Checking permissions on dir: /home/jenkins
	I0914 21:59:46.040058   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Checking permissions on dir: /home
	I0914 21:59:46.040072   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Skipping /home - not owner
	I0914 21:59:46.041086   25747 main.go:141] libmachine: (multinode-124911-m02) define libvirt domain using xml: 
	I0914 21:59:46.041107   25747 main.go:141] libmachine: (multinode-124911-m02) <domain type='kvm'>
	I0914 21:59:46.041116   25747 main.go:141] libmachine: (multinode-124911-m02)   <name>multinode-124911-m02</name>
	I0914 21:59:46.041124   25747 main.go:141] libmachine: (multinode-124911-m02)   <memory unit='MiB'>2200</memory>
	I0914 21:59:46.041131   25747 main.go:141] libmachine: (multinode-124911-m02)   <vcpu>2</vcpu>
	I0914 21:59:46.041136   25747 main.go:141] libmachine: (multinode-124911-m02)   <features>
	I0914 21:59:46.041149   25747 main.go:141] libmachine: (multinode-124911-m02)     <acpi/>
	I0914 21:59:46.041176   25747 main.go:141] libmachine: (multinode-124911-m02)     <apic/>
	I0914 21:59:46.041188   25747 main.go:141] libmachine: (multinode-124911-m02)     <pae/>
	I0914 21:59:46.041204   25747 main.go:141] libmachine: (multinode-124911-m02)     
	I0914 21:59:46.041216   25747 main.go:141] libmachine: (multinode-124911-m02)   </features>
	I0914 21:59:46.041228   25747 main.go:141] libmachine: (multinode-124911-m02)   <cpu mode='host-passthrough'>
	I0914 21:59:46.041241   25747 main.go:141] libmachine: (multinode-124911-m02)   
	I0914 21:59:46.041256   25747 main.go:141] libmachine: (multinode-124911-m02)   </cpu>
	I0914 21:59:46.041273   25747 main.go:141] libmachine: (multinode-124911-m02)   <os>
	I0914 21:59:46.041289   25747 main.go:141] libmachine: (multinode-124911-m02)     <type>hvm</type>
	I0914 21:59:46.041301   25747 main.go:141] libmachine: (multinode-124911-m02)     <boot dev='cdrom'/>
	I0914 21:59:46.041311   25747 main.go:141] libmachine: (multinode-124911-m02)     <boot dev='hd'/>
	I0914 21:59:46.041326   25747 main.go:141] libmachine: (multinode-124911-m02)     <bootmenu enable='no'/>
	I0914 21:59:46.041338   25747 main.go:141] libmachine: (multinode-124911-m02)   </os>
	I0914 21:59:46.041346   25747 main.go:141] libmachine: (multinode-124911-m02)   <devices>
	I0914 21:59:46.041359   25747 main.go:141] libmachine: (multinode-124911-m02)     <disk type='file' device='cdrom'>
	I0914 21:59:46.041390   25747 main.go:141] libmachine: (multinode-124911-m02)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/boot2docker.iso'/>
	I0914 21:59:46.041416   25747 main.go:141] libmachine: (multinode-124911-m02)       <target dev='hdc' bus='scsi'/>
	I0914 21:59:46.041424   25747 main.go:141] libmachine: (multinode-124911-m02)       <readonly/>
	I0914 21:59:46.041434   25747 main.go:141] libmachine: (multinode-124911-m02)     </disk>
	I0914 21:59:46.041462   25747 main.go:141] libmachine: (multinode-124911-m02)     <disk type='file' device='disk'>
	I0914 21:59:46.041483   25747 main.go:141] libmachine: (multinode-124911-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 21:59:46.041502   25747 main.go:141] libmachine: (multinode-124911-m02)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/multinode-124911-m02.rawdisk'/>
	I0914 21:59:46.041516   25747 main.go:141] libmachine: (multinode-124911-m02)       <target dev='hda' bus='virtio'/>
	I0914 21:59:46.041531   25747 main.go:141] libmachine: (multinode-124911-m02)     </disk>
	I0914 21:59:46.041545   25747 main.go:141] libmachine: (multinode-124911-m02)     <interface type='network'>
	I0914 21:59:46.041560   25747 main.go:141] libmachine: (multinode-124911-m02)       <source network='mk-multinode-124911'/>
	I0914 21:59:46.041575   25747 main.go:141] libmachine: (multinode-124911-m02)       <model type='virtio'/>
	I0914 21:59:46.041604   25747 main.go:141] libmachine: (multinode-124911-m02)     </interface>
	I0914 21:59:46.041633   25747 main.go:141] libmachine: (multinode-124911-m02)     <interface type='network'>
	I0914 21:59:46.041648   25747 main.go:141] libmachine: (multinode-124911-m02)       <source network='default'/>
	I0914 21:59:46.041667   25747 main.go:141] libmachine: (multinode-124911-m02)       <model type='virtio'/>
	I0914 21:59:46.041683   25747 main.go:141] libmachine: (multinode-124911-m02)     </interface>
	I0914 21:59:46.041702   25747 main.go:141] libmachine: (multinode-124911-m02)     <serial type='pty'>
	I0914 21:59:46.041720   25747 main.go:141] libmachine: (multinode-124911-m02)       <target port='0'/>
	I0914 21:59:46.041733   25747 main.go:141] libmachine: (multinode-124911-m02)     </serial>
	I0914 21:59:46.041753   25747 main.go:141] libmachine: (multinode-124911-m02)     <console type='pty'>
	I0914 21:59:46.041779   25747 main.go:141] libmachine: (multinode-124911-m02)       <target type='serial' port='0'/>
	I0914 21:59:46.041794   25747 main.go:141] libmachine: (multinode-124911-m02)     </console>
	I0914 21:59:46.041808   25747 main.go:141] libmachine: (multinode-124911-m02)     <rng model='virtio'>
	I0914 21:59:46.041826   25747 main.go:141] libmachine: (multinode-124911-m02)       <backend model='random'>/dev/random</backend>
	I0914 21:59:46.041840   25747 main.go:141] libmachine: (multinode-124911-m02)     </rng>
	I0914 21:59:46.041856   25747 main.go:141] libmachine: (multinode-124911-m02)     
	I0914 21:59:46.041869   25747 main.go:141] libmachine: (multinode-124911-m02)     
	I0914 21:59:46.041884   25747 main.go:141] libmachine: (multinode-124911-m02)   </devices>
	I0914 21:59:46.041904   25747 main.go:141] libmachine: (multinode-124911-m02) </domain>
	I0914 21:59:46.041917   25747 main.go:141] libmachine: (multinode-124911-m02) 
	I0914 21:59:46.048503   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:26:84:22 in network default
	I0914 21:59:46.049021   25747 main.go:141] libmachine: (multinode-124911-m02) Ensuring networks are active...
	I0914 21:59:46.049037   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:46.049650   25747 main.go:141] libmachine: (multinode-124911-m02) Ensuring network default is active
	I0914 21:59:46.049968   25747 main.go:141] libmachine: (multinode-124911-m02) Ensuring network mk-multinode-124911 is active
	I0914 21:59:46.050296   25747 main.go:141] libmachine: (multinode-124911-m02) Getting domain xml...
	I0914 21:59:46.050992   25747 main.go:141] libmachine: (multinode-124911-m02) Creating domain...
	I0914 21:59:47.256939   25747 main.go:141] libmachine: (multinode-124911-m02) Waiting to get IP...
	I0914 21:59:47.257761   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:47.258115   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:47.258142   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:47.258094   26106 retry.go:31] will retry after 288.327386ms: waiting for machine to come up
	I0914 21:59:47.547529   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:47.548082   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:47.548118   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:47.548018   26106 retry.go:31] will retry after 301.005398ms: waiting for machine to come up
	I0914 21:59:47.850485   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:47.850862   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:47.850888   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:47.850810   26106 retry.go:31] will retry after 367.764142ms: waiting for machine to come up
	I0914 21:59:48.220367   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:48.220768   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:48.220798   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:48.220717   26106 retry.go:31] will retry after 596.215395ms: waiting for machine to come up
	I0914 21:59:48.818440   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:48.818823   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:48.818854   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:48.818769   26106 retry.go:31] will retry after 554.889372ms: waiting for machine to come up
	I0914 21:59:49.375429   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:49.375862   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:49.375892   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:49.375803   26106 retry.go:31] will retry after 757.401361ms: waiting for machine to come up
	I0914 21:59:50.134620   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:50.135176   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:50.135209   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:50.135118   26106 retry.go:31] will retry after 1.119310637s: waiting for machine to come up
	I0914 21:59:51.256001   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:51.256466   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:51.256490   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:51.256442   26106 retry.go:31] will retry after 1.005821438s: waiting for machine to come up
	I0914 21:59:52.263568   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:52.263953   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:52.263987   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:52.263906   26106 retry.go:31] will retry after 1.230477497s: waiting for machine to come up
	I0914 21:59:53.496282   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:53.496730   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:53.496762   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:53.496668   26106 retry.go:31] will retry after 1.72171357s: waiting for machine to come up
	I0914 21:59:55.220761   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:55.221286   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:55.221314   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:55.221222   26106 retry.go:31] will retry after 2.046969227s: waiting for machine to come up
	I0914 21:59:57.269878   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:57.270479   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:57.270518   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:57.270366   26106 retry.go:31] will retry after 2.594753314s: waiting for machine to come up
	I0914 21:59:59.867344   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 21:59:59.867787   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 21:59:59.867814   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 21:59:59.867740   26106 retry.go:31] will retry after 3.412135573s: waiting for machine to come up
	I0914 22:00:03.281101   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:03.281541   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find current IP address of domain multinode-124911-m02 in network mk-multinode-124911
	I0914 22:00:03.281562   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | I0914 22:00:03.281505   26106 retry.go:31] will retry after 4.003908684s: waiting for machine to come up
	I0914 22:00:07.288204   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:07.288695   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has current primary IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:07.288717   25747 main.go:141] libmachine: (multinode-124911-m02) Found IP for machine: 192.168.39.254
	I0914 22:00:07.288733   25747 main.go:141] libmachine: (multinode-124911-m02) Reserving static IP address...
	I0914 22:00:07.288980   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find host DHCP lease matching {name: "multinode-124911-m02", mac: "52:54:00:55:38:83", ip: "192.168.39.254"} in network mk-multinode-124911
	I0914 22:00:07.357422   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Getting to WaitForSSH function...
	I0914 22:00:07.357454   25747 main.go:141] libmachine: (multinode-124911-m02) Reserved static IP address: 192.168.39.254
	I0914 22:00:07.357470   25747 main.go:141] libmachine: (multinode-124911-m02) Waiting for SSH to be available...
	I0914 22:00:07.360514   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:07.360866   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911
	I0914 22:00:07.360896   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | unable to find defined IP address of network mk-multinode-124911 interface with MAC address 52:54:00:55:38:83
	I0914 22:00:07.360989   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Using SSH client type: external
	I0914 22:00:07.361016   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa (-rw-------)
	I0914 22:00:07.361052   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:00:07.361070   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | About to run SSH command:
	I0914 22:00:07.361087   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | exit 0
	I0914 22:00:07.364759   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | SSH cmd err, output: exit status 255: 
	I0914 22:00:07.364779   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0914 22:00:07.364787   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | command : exit 0
	I0914 22:00:07.364793   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | err     : exit status 255
	I0914 22:00:07.364800   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | output  : 
	I0914 22:00:10.366930   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Getting to WaitForSSH function...
	I0914 22:00:10.369183   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.369623   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:10.369649   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.369845   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Using SSH client type: external
	I0914 22:00:10.369879   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa (-rw-------)
	I0914 22:00:10.369914   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:00:10.369938   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | About to run SSH command:
	I0914 22:00:10.369955   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | exit 0
	I0914 22:00:10.455052   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | SSH cmd err, output: <nil>: 
	I0914 22:00:10.455257   25747 main.go:141] libmachine: (multinode-124911-m02) KVM machine creation complete!
	I0914 22:00:10.455566   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetConfigRaw
	I0914 22:00:10.456106   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:00:10.456332   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:00:10.456496   25747 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 22:00:10.456515   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetState
	I0914 22:00:10.457701   25747 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 22:00:10.457719   25747 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 22:00:10.457732   25747 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 22:00:10.457746   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:10.460019   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.460370   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:10.460403   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.460518   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:10.460684   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:10.460837   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:10.460976   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:10.461136   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 22:00:10.461473   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:00:10.461483   25747 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 22:00:10.570817   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:00:10.570846   25747 main.go:141] libmachine: Detecting the provisioner...
	I0914 22:00:10.570859   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:10.573514   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.573865   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:10.573900   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.574027   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:10.574234   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:10.574454   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:10.574557   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:10.574730   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 22:00:10.575030   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:00:10.575042   25747 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 22:00:10.688183   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g52d8811-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0914 22:00:10.688256   25747 main.go:141] libmachine: found compatible host: buildroot
	I0914 22:00:10.688265   25747 main.go:141] libmachine: Provisioning with buildroot...
	I0914 22:00:10.688278   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetMachineName
	I0914 22:00:10.688597   25747 buildroot.go:166] provisioning hostname "multinode-124911-m02"
	I0914 22:00:10.688628   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetMachineName
	I0914 22:00:10.688833   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:10.691538   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.691915   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:10.691949   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.692082   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:10.692263   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:10.692429   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:10.692541   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:10.692665   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 22:00:10.693002   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:00:10.693022   25747 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-124911-m02 && echo "multinode-124911-m02" | sudo tee /etc/hostname
	I0914 22:00:10.816715   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-124911-m02
	
	I0914 22:00:10.816746   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:10.819365   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.819726   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:10.819756   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.819949   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:10.820135   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:10.820286   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:10.820455   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:10.820624   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 22:00:10.821148   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:00:10.821181   25747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124911-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124911-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124911-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:00:10.939665   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:00:10.939694   25747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:00:10.939720   25747 buildroot.go:174] setting up certificates
	I0914 22:00:10.939728   25747 provision.go:83] configureAuth start
	I0914 22:00:10.939736   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetMachineName
	I0914 22:00:10.939972   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetIP
	I0914 22:00:10.942229   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.942495   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:10.942540   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.942637   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:10.945043   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.945319   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:10.945343   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:10.945499   25747 provision.go:138] copyHostCerts
	I0914 22:00:10.945525   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:00:10.945551   25747 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:00:10.945560   25747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:00:10.945631   25747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:00:10.945735   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:00:10.945756   25747 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:00:10.945760   25747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:00:10.945784   25747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:00:10.945821   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:00:10.945836   25747 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:00:10.945842   25747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:00:10.945860   25747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:00:10.945899   25747 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.multinode-124911-m02 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube multinode-124911-m02]
	I0914 22:00:11.034084   25747 provision.go:172] copyRemoteCerts
	I0914 22:00:11.034164   25747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:00:11.034192   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:11.036793   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.037122   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.037152   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.037324   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:11.037493   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:11.037728   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:11.037886   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:00:11.119620   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 22:00:11.119706   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0914 22:00:11.141308   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 22:00:11.141384   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:00:11.162594   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 22:00:11.162657   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:00:11.182815   25747 provision.go:86] duration metric: configureAuth took 243.073876ms
	I0914 22:00:11.182840   25747 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:00:11.183017   25747 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:00:11.183100   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:11.185433   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.185772   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.185807   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.185928   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:11.186103   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:11.186271   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:11.186408   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:11.186559   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 22:00:11.186844   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:00:11.186861   25747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:00:11.462973   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:00:11.463021   25747 main.go:141] libmachine: Checking connection to Docker...
	I0914 22:00:11.463036   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetURL
	I0914 22:00:11.464148   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | Using libvirt version 6000000
	I0914 22:00:11.466320   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.466616   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.466651   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.466743   25747 main.go:141] libmachine: Docker is up and running!
	I0914 22:00:11.466759   25747 main.go:141] libmachine: Reticulating splines...
	I0914 22:00:11.466765   25747 client.go:171] LocalClient.Create took 25.729625828s
	I0914 22:00:11.466784   25747 start.go:167] duration metric: libmachine.API.Create for "multinode-124911" took 25.729675729s
	I0914 22:00:11.466793   25747 start.go:300] post-start starting for "multinode-124911-m02" (driver="kvm2")
	I0914 22:00:11.466801   25747 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:00:11.466826   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:00:11.467063   25747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:00:11.467089   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:11.468788   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.469127   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.469163   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.469327   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:11.469494   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:11.469597   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:11.469775   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:00:11.552701   25747 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:00:11.556396   25747 command_runner.go:130] > NAME=Buildroot
	I0914 22:00:11.556418   25747 command_runner.go:130] > VERSION=2021.02.12-1-g52d8811-dirty
	I0914 22:00:11.556425   25747 command_runner.go:130] > ID=buildroot
	I0914 22:00:11.556442   25747 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 22:00:11.556450   25747 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 22:00:11.556550   25747 info.go:137] Remote host: Buildroot 2021.02.12
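The "Remote host: Buildroot 2021.02.12" line above is derived from the /etc/os-release output printed just before it (the PRETTY_NAME field). A minimal Go sketch of that kind of parsing follows; it is an illustration only, not minikube's actual info.go code, and the path used is the standard one rather than anything minikube-specific.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads KEY=VALUE lines from an os-release style file and
// returns them as a map, trimming surrounding quotes from the values.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			kv[k] = strings.Trim(v, `"`)
		}
	}
	return kv, sc.Err()
}

func main() {
	kv, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("Remote host:", kv["PRETTY_NAME"])
}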
	I0914 22:00:11.556572   25747 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:00:11.556651   25747 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:00:11.556779   25747 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:00:11.556794   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /etc/ssl/certs/134852.pem
	I0914 22:00:11.556924   25747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:00:11.565585   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:00:11.585622   25747 start.go:303] post-start completed in 118.82055ms
	I0914 22:00:11.585660   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetConfigRaw
	I0914 22:00:11.586201   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetIP
	I0914 22:00:11.588433   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.588788   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.588819   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.588995   25747 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 22:00:11.589158   25747 start.go:128] duration metric: createHost completed in 25.869396968s
	I0914 22:00:11.589179   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:11.591331   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.591697   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.591722   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.591885   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:11.592053   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:11.592194   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:11.592310   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:11.592426   25747 main.go:141] libmachine: Using SSH client type: native
	I0914 22:00:11.592774   25747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:00:11.592795   25747 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:00:11.703493   25747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694728811.679800732
	
	I0914 22:00:11.703515   25747 fix.go:206] guest clock: 1694728811.679800732
	I0914 22:00:11.703523   25747 fix.go:219] Guest: 2023-09-14 22:00:11.679800732 +0000 UTC Remote: 2023-09-14 22:00:11.589168732 +0000 UTC m=+94.325132461 (delta=90.632ms)
	I0914 22:00:11.703536   25747 fix.go:190] guest clock delta is within tolerance: 90.632ms
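The clock check above compares the guest's "date +%s.%N" output against the host-side timestamp and accepts the result because the delta (90.632ms) is small. Below is a rough Go sketch of that comparison; the 2s tolerance is an assumed value, not the threshold fix.go actually applies, and the guest value is copied from the log line above.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch parses "seconds.nanoseconds" as printed by date +%s.%N.
func parseEpoch(s string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(s, ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	// Pad the fractional part to nine digits so it reads as nanoseconds.
	for len(nsecStr) < 9 {
		nsecStr += "0"
	}
	nsec, err := strconv.ParseInt(nsecStr[:9], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest value taken from the SSH output above.
	guest, err := parseEpoch("1694728811.679800732")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance for this sketch.
	const tolerance = 2 * time.Second
	fmt.Printf("guest=%s delta=%s within tolerance=%v\n",
		guest.Format(time.RFC3339Nano), delta, delta <= tolerance)
}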
	I0914 22:00:11.703541   25747 start.go:83] releasing machines lock for "multinode-124911-m02", held for 25.983850589s
	I0914 22:00:11.703567   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:00:11.703846   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetIP
	I0914 22:00:11.706376   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.706733   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.706756   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.708978   25747 out.go:177] * Found network options:
	I0914 22:00:11.710267   25747 out.go:177]   - NO_PROXY=192.168.39.116
	W0914 22:00:11.711551   25747 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 22:00:11.711581   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:00:11.712132   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:00:11.712306   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:00:11.712387   25747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:00:11.712431   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	W0914 22:00:11.712528   25747 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 22:00:11.712602   25747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:00:11.712625   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:00:11.715164   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.715530   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.715564   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.715591   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.715746   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:11.715945   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:11.715980   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:11.716005   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:11.716140   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:11.716152   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:00:11.716336   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:00:11.716332   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:00:11.716447   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:00:11.716550   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:00:11.827289   25747 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 22:00:11.949781   25747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:00:11.955283   25747 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 22:00:11.955400   25747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:00:11.955450   25747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:00:11.969764   25747 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 22:00:11.969790   25747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
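The two commands above disable the bridge/podman CNI configs by renaming them to *.mk_disabled so that only the CNI minikube manages stays active. A simplified Go equivalent of that rename pass is sketched below; the name matching is a plain substring check rather than the exact find expression from the log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs to <name>.mk_disabled,
// which is the effect of the find/mv command in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled bridge cni config(s):", disabled)
}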
	I0914 22:00:11.969800   25747 start.go:469] detecting cgroup driver to use...
	I0914 22:00:11.969855   25747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:00:11.982818   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:00:11.994525   25747 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:00:11.994580   25747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:00:12.006305   25747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:00:12.018088   25747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:00:12.030803   25747 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0914 22:00:12.122479   25747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:00:12.236552   25747 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0914 22:00:12.236597   25747 docker.go:212] disabling docker service ...
	I0914 22:00:12.236644   25747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:00:12.249358   25747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:00:12.260111   25747 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0914 22:00:12.260201   25747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:00:12.365067   25747 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0914 22:00:12.365132   25747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:00:12.379258   25747 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0914 22:00:12.379663   25747 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0914 22:00:12.465836   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:00:12.478538   25747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:00:12.494491   25747 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 22:00:12.494522   25747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:00:12.494562   25747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:00:12.503881   25747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:00:12.503928   25747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:00:12.513089   25747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:00:12.522313   25747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
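The sequence of sed commands above pins the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. The Go sketch below performs the same two in-place rewrites with regular expressions; the path and values mirror the log, but this is not the code minikube itself runs.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf does the same in-place edits as the two sed commands above:
// pin pause_image and cgroup_manager in 02-crio.conf, overwriting commented or
// existing lines just like the original "^.*key = .*$" patterns do.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}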
	I0914 22:00:12.532388   25747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:00:12.542660   25747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:00:12.551548   25747 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:00:12.551583   25747 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:00:12.551619   25747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:00:12.565554   25747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
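When the bridge-nf-call-iptables sysctl is missing (as the stat failure above shows), the provisioner falls back to loading br_netfilter and then enables IPv4 forwarding. A hedged Go sketch of that fallback is shown here; it needs root, the paths match the log, and the error handling is intentionally minimal.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback seen in the log: if the sysctl
// file is absent, load br_netfilter with modprobe (which creates the
// bridge-nf entries), then switch IPv4 forwarding on.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}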
	I0914 22:00:12.574798   25747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:00:12.675114   25747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:00:12.832882   25747 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:00:12.832962   25747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:00:12.837728   25747 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 22:00:12.837745   25747 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 22:00:12.837752   25747 command_runner.go:130] > Device: 16h/22d	Inode: 744         Links: 1
	I0914 22:00:12.837758   25747 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:00:12.837763   25747 command_runner.go:130] > Access: 2023-09-14 22:00:12.798581404 +0000
	I0914 22:00:12.837769   25747 command_runner.go:130] > Modify: 2023-09-14 22:00:12.798581404 +0000
	I0914 22:00:12.837773   25747 command_runner.go:130] > Change: 2023-09-14 22:00:12.798581404 +0000
	I0914 22:00:12.837777   25747 command_runner.go:130] >  Birth: -
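After restarting crio, the provisioner waits up to 60s for /var/run/crio/crio.sock to appear, which the stat output above confirms. A small polling sketch in Go follows; only the 60s budget comes from the log, the 500ms poll interval is an assumption of this sketch.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists and is a unix socket, or the
// timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}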
	I0914 22:00:12.837862   25747 start.go:537] Will wait 60s for crictl version
	I0914 22:00:12.837919   25747 ssh_runner.go:195] Run: which crictl
	I0914 22:00:12.841024   25747 command_runner.go:130] > /usr/bin/crictl
	I0914 22:00:12.841429   25747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:00:12.867428   25747 command_runner.go:130] > Version:  0.1.0
	I0914 22:00:12.867444   25747 command_runner.go:130] > RuntimeName:  cri-o
	I0914 22:00:12.867449   25747 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0914 22:00:12.867455   25747 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0914 22:00:12.867644   25747 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:00:12.867714   25747 ssh_runner.go:195] Run: crio --version
	I0914 22:00:12.907588   25747 command_runner.go:130] > crio version 1.24.1
	I0914 22:00:12.907604   25747 command_runner.go:130] > Version:          1.24.1
	I0914 22:00:12.907611   25747 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 22:00:12.907615   25747 command_runner.go:130] > GitTreeState:     dirty
	I0914 22:00:12.907623   25747 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 22:00:12.907628   25747 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 22:00:12.907632   25747 command_runner.go:130] > Compiler:         gc
	I0914 22:00:12.907636   25747 command_runner.go:130] > Platform:         linux/amd64
	I0914 22:00:12.907641   25747 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:00:12.907649   25747 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:00:12.907653   25747 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:00:12.907657   25747 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:00:12.907861   25747 ssh_runner.go:195] Run: crio --version
	I0914 22:00:12.952782   25747 command_runner.go:130] > crio version 1.24.1
	I0914 22:00:12.952802   25747 command_runner.go:130] > Version:          1.24.1
	I0914 22:00:12.952809   25747 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 22:00:12.952813   25747 command_runner.go:130] > GitTreeState:     dirty
	I0914 22:00:12.952818   25747 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 22:00:12.952823   25747 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 22:00:12.952827   25747 command_runner.go:130] > Compiler:         gc
	I0914 22:00:12.952831   25747 command_runner.go:130] > Platform:         linux/amd64
	I0914 22:00:12.952837   25747 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:00:12.952844   25747 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:00:12.952849   25747 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:00:12.952853   25747 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:00:12.955676   25747 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:00:12.957075   25747 out.go:177]   - env NO_PROXY=192.168.39.116
	I0914 22:00:12.958457   25747 main.go:141] libmachine: (multinode-124911-m02) Calling .GetIP
	I0914 22:00:12.961196   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:12.961575   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:00:12.961604   25747 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:00:12.961809   25747 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:00:12.965781   25747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
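The bash one-liner above rewrites /etc/hosts so host.minikube.internal points at the gateway IP 192.168.39.1. The Go sketch below has the same effect; writing the file back directly (instead of via a temp file and sudo cp) is a simplification of this sketch.

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostEntry removes any stale host.minikube.internal line from /etc/hosts
// and appends a fresh mapping to the gateway IP, matching the bash snippet in
// the log.
func addHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the old entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := addHostEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}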
	I0914 22:00:12.976324   25747 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911 for IP: 192.168.39.254
	I0914 22:00:12.976358   25747 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:00:12.976527   25747 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:00:12.976585   25747 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:00:12.976604   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 22:00:12.976631   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 22:00:12.976653   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 22:00:12.976675   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 22:00:12.976760   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:00:12.976807   25747 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:00:12.976824   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:00:12.976872   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:00:12.976912   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:00:12.976949   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:00:12.977016   25747 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:00:12.977064   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:00:12.977087   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem -> /usr/share/ca-certificates/13485.pem
	I0914 22:00:12.977108   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /usr/share/ca-certificates/134852.pem
	I0914 22:00:12.977579   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:00:12.998281   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:00:13.018606   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:00:13.037860   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:00:13.057116   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:00:13.076445   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:00:13.097630   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:00:13.118622   25747 ssh_runner.go:195] Run: openssl version
	I0914 22:00:13.123499   25747 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0914 22:00:13.123842   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:00:13.134153   25747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:00:13.138067   25747 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:00:13.138331   25747 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:00:13.138371   25747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:00:13.143119   25747 command_runner.go:130] > b5213941
	I0914 22:00:13.143383   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:00:13.153558   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:00:13.163838   25747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:00:13.167685   25747 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:00:13.168068   25747 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:00:13.168109   25747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:00:13.173022   25747 command_runner.go:130] > 51391683
	I0914 22:00:13.173152   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:00:13.184071   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:00:13.194936   25747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:00:13.199039   25747 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:00:13.199061   25747 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:00:13.199089   25747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:00:13.204181   25747 command_runner.go:130] > 3ec20f2e
	I0914 22:00:13.204232   25747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
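Each CA certificate copied above is also exposed under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates CAs in /etc/ssl/certs. The sketch below shows that hash-and-symlink step in Go; the openssl invocation matches the one in the log, but the paths are illustrative and the log links the copies it has already placed under /usr/share/ca-certificates.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and makes
// "<hash>.0" in certsDir point at the certificate, the same effect as the
// "openssl x509 -hash" plus "ln -fs" pair in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}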
	I0914 22:00:13.214646   25747 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:00:13.218210   25747 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:00:13.218364   25747 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:00:13.218460   25747 ssh_runner.go:195] Run: crio config
	I0914 22:00:13.274730   25747 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 22:00:13.274764   25747 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 22:00:13.274779   25747 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 22:00:13.274786   25747 command_runner.go:130] > #
	I0914 22:00:13.274797   25747 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 22:00:13.274809   25747 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 22:00:13.274823   25747 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 22:00:13.274839   25747 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 22:00:13.274850   25747 command_runner.go:130] > # reload'.
	I0914 22:00:13.274866   25747 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 22:00:13.274882   25747 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 22:00:13.274897   25747 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 22:00:13.274912   25747 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 22:00:13.274919   25747 command_runner.go:130] > [crio]
	I0914 22:00:13.274933   25747 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 22:00:13.274943   25747 command_runner.go:130] > # containers images, in this directory.
	I0914 22:00:13.274962   25747 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 22:00:13.274982   25747 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 22:00:13.274995   25747 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 22:00:13.275011   25747 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 22:00:13.275026   25747 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 22:00:13.275038   25747 command_runner.go:130] > storage_driver = "overlay"
	I0914 22:00:13.275048   25747 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 22:00:13.275062   25747 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 22:00:13.275074   25747 command_runner.go:130] > storage_option = [
	I0914 22:00:13.275085   25747 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 22:00:13.275096   25747 command_runner.go:130] > ]
	I0914 22:00:13.275108   25747 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 22:00:13.275123   25747 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 22:00:13.275136   25747 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 22:00:13.275147   25747 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 22:00:13.275163   25747 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 22:00:13.275176   25747 command_runner.go:130] > # always happen on a node reboot
	I0914 22:00:13.275189   25747 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 22:00:13.275203   25747 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 22:00:13.275218   25747 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 22:00:13.275237   25747 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 22:00:13.275250   25747 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0914 22:00:13.275265   25747 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 22:00:13.275283   25747 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 22:00:13.275296   25747 command_runner.go:130] > # internal_wipe = true
	I0914 22:00:13.275311   25747 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 22:00:13.275326   25747 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 22:00:13.275341   25747 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 22:00:13.275355   25747 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 22:00:13.275366   25747 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 22:00:13.275377   25747 command_runner.go:130] > [crio.api]
	I0914 22:00:13.275388   25747 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 22:00:13.275398   25747 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 22:00:13.275404   25747 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 22:00:13.275454   25747 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 22:00:13.275484   25747 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 22:00:13.275498   25747 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 22:00:13.275509   25747 command_runner.go:130] > # stream_port = "0"
	I0914 22:00:13.275521   25747 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 22:00:13.275534   25747 command_runner.go:130] > # stream_enable_tls = false
	I0914 22:00:13.275549   25747 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 22:00:13.275561   25747 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 22:00:13.275576   25747 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 22:00:13.275591   25747 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 22:00:13.275600   25747 command_runner.go:130] > # minutes.
	I0914 22:00:13.275605   25747 command_runner.go:130] > # stream_tls_cert = ""
	I0914 22:00:13.275612   25747 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 22:00:13.275620   25747 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 22:00:13.275625   25747 command_runner.go:130] > # stream_tls_key = ""
	I0914 22:00:13.275637   25747 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 22:00:13.275651   25747 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 22:00:13.275669   25747 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 22:00:13.275684   25747 command_runner.go:130] > # stream_tls_ca = ""
	I0914 22:00:13.275701   25747 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:00:13.275730   25747 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 22:00:13.275748   25747 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:00:13.275761   25747 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0914 22:00:13.275785   25747 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 22:00:13.275801   25747 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 22:00:13.275813   25747 command_runner.go:130] > [crio.runtime]
	I0914 22:00:13.275827   25747 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 22:00:13.275841   25747 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 22:00:13.275853   25747 command_runner.go:130] > # "nofile=1024:2048"
	I0914 22:00:13.275864   25747 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 22:00:13.275875   25747 command_runner.go:130] > # default_ulimits = [
	I0914 22:00:13.275883   25747 command_runner.go:130] > # ]
	I0914 22:00:13.275897   25747 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 22:00:13.275909   25747 command_runner.go:130] > # no_pivot = false
	I0914 22:00:13.275928   25747 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 22:00:13.275947   25747 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 22:00:13.275959   25747 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 22:00:13.275974   25747 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 22:00:13.275988   25747 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 22:00:13.276005   25747 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:00:13.276017   25747 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 22:00:13.276027   25747 command_runner.go:130] > # Cgroup setting for conmon
	I0914 22:00:13.276035   25747 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 22:00:13.276043   25747 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 22:00:13.276049   25747 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 22:00:13.276057   25747 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 22:00:13.276066   25747 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:00:13.276072   25747 command_runner.go:130] > conmon_env = [
	I0914 22:00:13.276083   25747 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 22:00:13.276093   25747 command_runner.go:130] > ]
	I0914 22:00:13.276107   25747 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 22:00:13.276121   25747 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 22:00:13.276134   25747 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 22:00:13.276142   25747 command_runner.go:130] > # default_env = [
	I0914 22:00:13.276156   25747 command_runner.go:130] > # ]
	I0914 22:00:13.276168   25747 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 22:00:13.276176   25747 command_runner.go:130] > # selinux = false
	I0914 22:00:13.276192   25747 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 22:00:13.276207   25747 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 22:00:13.276218   25747 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 22:00:13.276233   25747 command_runner.go:130] > # seccomp_profile = ""
	I0914 22:00:13.276243   25747 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 22:00:13.276258   25747 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 22:00:13.276274   25747 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 22:00:13.276286   25747 command_runner.go:130] > # which might increase security.
	I0914 22:00:13.276298   25747 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 22:00:13.276310   25747 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 22:00:13.276323   25747 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 22:00:13.276338   25747 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 22:00:13.276353   25747 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 22:00:13.276366   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:00:13.276379   25747 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 22:00:13.276394   25747 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 22:00:13.276405   25747 command_runner.go:130] > # the cgroup blockio controller.
	I0914 22:00:13.276417   25747 command_runner.go:130] > # blockio_config_file = ""
	I0914 22:00:13.276433   25747 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 22:00:13.276445   25747 command_runner.go:130] > # irqbalance daemon.
	I0914 22:00:13.276455   25747 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 22:00:13.276504   25747 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 22:00:13.276519   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:00:13.276531   25747 command_runner.go:130] > # rdt_config_file = ""
	I0914 22:00:13.276542   25747 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 22:00:13.276555   25747 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 22:00:13.276570   25747 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 22:00:13.276582   25747 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 22:00:13.276597   25747 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 22:00:13.276612   25747 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 22:00:13.276627   25747 command_runner.go:130] > # will be added.
	I0914 22:00:13.276638   25747 command_runner.go:130] > # default_capabilities = [
	I0914 22:00:13.276647   25747 command_runner.go:130] > # 	"CHOWN",
	I0914 22:00:13.276652   25747 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 22:00:13.276663   25747 command_runner.go:130] > # 	"FSETID",
	I0914 22:00:13.276674   25747 command_runner.go:130] > # 	"FOWNER",
	I0914 22:00:13.276681   25747 command_runner.go:130] > # 	"SETGID",
	I0914 22:00:13.276693   25747 command_runner.go:130] > # 	"SETUID",
	I0914 22:00:13.276722   25747 command_runner.go:130] > # 	"SETPCAP",
	I0914 22:00:13.276735   25747 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 22:00:13.276743   25747 command_runner.go:130] > # 	"KILL",
	I0914 22:00:13.276753   25747 command_runner.go:130] > # ]
	I0914 22:00:13.276765   25747 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 22:00:13.276780   25747 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:00:13.276792   25747 command_runner.go:130] > # default_sysctls = [
	I0914 22:00:13.276800   25747 command_runner.go:130] > # ]
	I0914 22:00:13.276808   25747 command_runner.go:130] > # List of devices on the host that a
	I0914 22:00:13.276820   25747 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 22:00:13.276832   25747 command_runner.go:130] > # allowed_devices = [
	I0914 22:00:13.276844   25747 command_runner.go:130] > # 	"/dev/fuse",
	I0914 22:00:13.276854   25747 command_runner.go:130] > # ]
	I0914 22:00:13.276864   25747 command_runner.go:130] > # List of additional devices. specified as
	I0914 22:00:13.276877   25747 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 22:00:13.276891   25747 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 22:00:13.276917   25747 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:00:13.276931   25747 command_runner.go:130] > # additional_devices = [
	I0914 22:00:13.276938   25747 command_runner.go:130] > # ]
	I0914 22:00:13.276948   25747 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 22:00:13.276959   25747 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 22:00:13.276967   25747 command_runner.go:130] > # 	"/etc/cdi",
	I0914 22:00:13.276978   25747 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 22:00:13.276985   25747 command_runner.go:130] > # ]
	I0914 22:00:13.276996   25747 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 22:00:13.277011   25747 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 22:00:13.277019   25747 command_runner.go:130] > # Defaults to false.
	I0914 22:00:13.277035   25747 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 22:00:13.277049   25747 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 22:00:13.277065   25747 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 22:00:13.277074   25747 command_runner.go:130] > # hooks_dir = [
	I0914 22:00:13.277084   25747 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 22:00:13.277090   25747 command_runner.go:130] > # ]
	I0914 22:00:13.277106   25747 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 22:00:13.277122   25747 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 22:00:13.277137   25747 command_runner.go:130] > # its default mounts from the following two files:
	I0914 22:00:13.277147   25747 command_runner.go:130] > #
	I0914 22:00:13.277160   25747 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 22:00:13.277172   25747 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 22:00:13.277183   25747 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 22:00:13.277194   25747 command_runner.go:130] > #
	I0914 22:00:13.277205   25747 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 22:00:13.277222   25747 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 22:00:13.277242   25747 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 22:00:13.277255   25747 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 22:00:13.277265   25747 command_runner.go:130] > #
	I0914 22:00:13.277276   25747 command_runner.go:130] > # default_mounts_file = ""
	I0914 22:00:13.277296   25747 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 22:00:13.277312   25747 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 22:00:13.277325   25747 command_runner.go:130] > pids_limit = 1024
	I0914 22:00:13.277340   25747 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0914 22:00:13.277355   25747 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 22:00:13.277371   25747 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 22:00:13.277386   25747 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 22:00:13.277398   25747 command_runner.go:130] > # log_size_max = -1
	I0914 22:00:13.277414   25747 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0914 22:00:13.277427   25747 command_runner.go:130] > # log_to_journald = false
	I0914 22:00:13.277442   25747 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 22:00:13.277456   25747 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 22:00:13.277468   25747 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 22:00:13.277479   25747 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 22:00:13.277491   25747 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 22:00:13.277503   25747 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 22:00:13.277518   25747 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 22:00:13.277530   25747 command_runner.go:130] > # read_only = false
	I0914 22:00:13.277545   25747 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 22:00:13.277559   25747 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 22:00:13.277569   25747 command_runner.go:130] > # live configuration reload.
	I0914 22:00:13.277579   25747 command_runner.go:130] > # log_level = "info"
	I0914 22:00:13.277594   25747 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 22:00:13.277608   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:00:13.277620   25747 command_runner.go:130] > # log_filter = ""
	I0914 22:00:13.277635   25747 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 22:00:13.277677   25747 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 22:00:13.277689   25747 command_runner.go:130] > # separated by comma.
	I0914 22:00:13.277698   25747 command_runner.go:130] > # uid_mappings = ""
	I0914 22:00:13.277717   25747 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 22:00:13.277732   25747 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 22:00:13.277744   25747 command_runner.go:130] > # separated by comma.
	I0914 22:00:13.277755   25747 command_runner.go:130] > # gid_mappings = ""
	I0914 22:00:13.277766   25747 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 22:00:13.277781   25747 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:00:13.277796   25747 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:00:13.277810   25747 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 22:00:13.277825   25747 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 22:00:13.277841   25747 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:00:13.277855   25747 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:00:13.277865   25747 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 22:00:13.277879   25747 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 22:00:13.277894   25747 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 22:00:13.277909   25747 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 22:00:13.277921   25747 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 22:00:13.277936   25747 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 22:00:13.277951   25747 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 22:00:13.277962   25747 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 22:00:13.277974   25747 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 22:00:13.277986   25747 command_runner.go:130] > drop_infra_ctr = false
	I0914 22:00:13.278001   25747 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 22:00:13.278016   25747 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 22:00:13.278033   25747 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 22:00:13.278045   25747 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 22:00:13.278058   25747 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 22:00:13.278071   25747 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 22:00:13.278084   25747 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 22:00:13.278101   25747 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 22:00:13.278114   25747 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 22:00:13.278130   25747 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 22:00:13.278145   25747 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0914 22:00:13.278156   25747 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0914 22:00:13.278169   25747 command_runner.go:130] > # default_runtime = "runc"
	I0914 22:00:13.278184   25747 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 22:00:13.278202   25747 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 22:00:13.278221   25747 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 22:00:13.278235   25747 command_runner.go:130] > # creation as a file is not desired either.
	I0914 22:00:13.278250   25747 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 22:00:13.278263   25747 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 22:00:13.278276   25747 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 22:00:13.278287   25747 command_runner.go:130] > # ]
	I0914 22:00:13.278303   25747 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 22:00:13.278318   25747 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 22:00:13.278333   25747 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0914 22:00:13.278345   25747 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0914 22:00:13.278356   25747 command_runner.go:130] > #
	I0914 22:00:13.278370   25747 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0914 22:00:13.278379   25747 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0914 22:00:13.278391   25747 command_runner.go:130] > #  runtime_type = "oci"
	I0914 22:00:13.278404   25747 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0914 22:00:13.278417   25747 command_runner.go:130] > #  privileged_without_host_devices = false
	I0914 22:00:13.278430   25747 command_runner.go:130] > #  allowed_annotations = []
	I0914 22:00:13.278439   25747 command_runner.go:130] > # Where:
	I0914 22:00:13.278446   25747 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0914 22:00:13.278461   25747 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0914 22:00:13.278476   25747 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 22:00:13.278490   25747 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 22:00:13.278502   25747 command_runner.go:130] > #   in $PATH.
	I0914 22:00:13.278517   25747 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0914 22:00:13.278530   25747 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 22:00:13.278544   25747 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0914 22:00:13.278552   25747 command_runner.go:130] > #   state.
	I0914 22:00:13.278568   25747 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 22:00:13.278583   25747 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0914 22:00:13.278599   25747 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 22:00:13.278613   25747 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 22:00:13.278628   25747 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 22:00:13.278663   25747 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 22:00:13.278679   25747 command_runner.go:130] > #   The currently recognized values are:
	I0914 22:00:13.278691   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 22:00:13.278713   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 22:00:13.278728   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 22:00:13.278742   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 22:00:13.278755   25747 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 22:00:13.278770   25747 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 22:00:13.278787   25747 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 22:00:13.278806   25747 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0914 22:00:13.278815   25747 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 22:00:13.278823   25747 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 22:00:13.278831   25747 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 22:00:13.278842   25747 command_runner.go:130] > runtime_type = "oci"
	I0914 22:00:13.278854   25747 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 22:00:13.278865   25747 command_runner.go:130] > runtime_config_path = ""
	I0914 22:00:13.278877   25747 command_runner.go:130] > monitor_path = ""
	I0914 22:00:13.278888   25747 command_runner.go:130] > monitor_cgroup = ""
	I0914 22:00:13.278896   25747 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 22:00:13.278911   25747 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0914 22:00:13.278922   25747 command_runner.go:130] > # running containers
	I0914 22:00:13.278934   25747 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0914 22:00:13.278949   25747 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0914 22:00:13.278986   25747 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0914 22:00:13.278999   25747 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0914 22:00:13.279006   25747 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0914 22:00:13.279011   25747 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0914 22:00:13.279020   25747 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0914 22:00:13.279025   25747 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0914 22:00:13.279030   25747 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0914 22:00:13.279034   25747 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0914 22:00:13.279041   25747 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 22:00:13.279049   25747 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 22:00:13.279055   25747 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 22:00:13.279065   25747 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 22:00:13.279073   25747 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 22:00:13.279081   25747 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 22:00:13.279096   25747 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 22:00:13.279113   25747 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 22:00:13.279123   25747 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 22:00:13.279134   25747 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 22:00:13.279147   25747 command_runner.go:130] > # Example:
	I0914 22:00:13.279157   25747 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 22:00:13.279170   25747 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 22:00:13.279183   25747 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 22:00:13.279195   25747 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 22:00:13.279206   25747 command_runner.go:130] > # cpuset = 0
	I0914 22:00:13.279217   25747 command_runner.go:130] > # cpushares = "0-1"
	I0914 22:00:13.279225   25747 command_runner.go:130] > # Where:
	I0914 22:00:13.279238   25747 command_runner.go:130] > # The workload name is workload-type.
	I0914 22:00:13.279255   25747 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 22:00:13.279269   25747 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 22:00:13.279282   25747 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 22:00:13.279299   25747 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 22:00:13.279315   25747 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 22:00:13.279325   25747 command_runner.go:130] > # 
	I0914 22:00:13.279338   25747 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 22:00:13.279349   25747 command_runner.go:130] > #
	I0914 22:00:13.279362   25747 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 22:00:13.279368   25747 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 22:00:13.279378   25747 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 22:00:13.279386   25747 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 22:00:13.279395   25747 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 22:00:13.279401   25747 command_runner.go:130] > [crio.image]
	I0914 22:00:13.279408   25747 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 22:00:13.279415   25747 command_runner.go:130] > # default_transport = "docker://"
	I0914 22:00:13.279421   25747 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 22:00:13.279430   25747 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:00:13.279437   25747 command_runner.go:130] > # global_auth_file = ""
	I0914 22:00:13.279442   25747 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 22:00:13.279455   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:00:13.279474   25747 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0914 22:00:13.279487   25747 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 22:00:13.279504   25747 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:00:13.279517   25747 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:00:13.279530   25747 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 22:00:13.279545   25747 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 22:00:13.279556   25747 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 22:00:13.279563   25747 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 22:00:13.279571   25747 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 22:00:13.279576   25747 command_runner.go:130] > # pause_command = "/pause"
	I0914 22:00:13.279582   25747 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 22:00:13.279591   25747 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 22:00:13.279597   25747 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 22:00:13.279606   25747 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 22:00:13.279611   25747 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 22:00:13.279618   25747 command_runner.go:130] > # signature_policy = ""
	I0914 22:00:13.279624   25747 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 22:00:13.279633   25747 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 22:00:13.279637   25747 command_runner.go:130] > # changing them here.
	I0914 22:00:13.279641   25747 command_runner.go:130] > # insecure_registries = [
	I0914 22:00:13.279646   25747 command_runner.go:130] > # ]
	I0914 22:00:13.279653   25747 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 22:00:13.279658   25747 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 22:00:13.279662   25747 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 22:00:13.279667   25747 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 22:00:13.279671   25747 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 22:00:13.279680   25747 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0914 22:00:13.279684   25747 command_runner.go:130] > # CNI plugins.
	I0914 22:00:13.279691   25747 command_runner.go:130] > [crio.network]
	I0914 22:00:13.279697   25747 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 22:00:13.279710   25747 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 22:00:13.279716   25747 command_runner.go:130] > # cni_default_network = ""
	I0914 22:00:13.279722   25747 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 22:00:13.279729   25747 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 22:00:13.279735   25747 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 22:00:13.279743   25747 command_runner.go:130] > # plugin_dirs = [
	I0914 22:00:13.279747   25747 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 22:00:13.279754   25747 command_runner.go:130] > # ]
	I0914 22:00:13.279761   25747 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 22:00:13.279768   25747 command_runner.go:130] > [crio.metrics]
	I0914 22:00:13.279773   25747 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 22:00:13.279780   25747 command_runner.go:130] > enable_metrics = true
	I0914 22:00:13.279784   25747 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 22:00:13.279792   25747 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 22:00:13.279798   25747 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0914 22:00:13.279807   25747 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 22:00:13.279815   25747 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 22:00:13.279822   25747 command_runner.go:130] > # metrics_collectors = [
	I0914 22:00:13.279826   25747 command_runner.go:130] > # 	"operations",
	I0914 22:00:13.279833   25747 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 22:00:13.279838   25747 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 22:00:13.279845   25747 command_runner.go:130] > # 	"operations_errors",
	I0914 22:00:13.279849   25747 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 22:00:13.279856   25747 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 22:00:13.279861   25747 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 22:00:13.279867   25747 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 22:00:13.279872   25747 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 22:00:13.279879   25747 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 22:00:13.279883   25747 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 22:00:13.279890   25747 command_runner.go:130] > # 	"containers_oom_total",
	I0914 22:00:13.279894   25747 command_runner.go:130] > # 	"containers_oom",
	I0914 22:00:13.279899   25747 command_runner.go:130] > # 	"processes_defunct",
	I0914 22:00:13.279911   25747 command_runner.go:130] > # 	"operations_total",
	I0914 22:00:13.279915   25747 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 22:00:13.279921   25747 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 22:00:13.279926   25747 command_runner.go:130] > # 	"operations_errors_total",
	I0914 22:00:13.279933   25747 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 22:00:13.279938   25747 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 22:00:13.279948   25747 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 22:00:13.279961   25747 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 22:00:13.279972   25747 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 22:00:13.279983   25747 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 22:00:13.279993   25747 command_runner.go:130] > # ]
	I0914 22:00:13.280007   25747 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 22:00:13.280020   25747 command_runner.go:130] > # metrics_port = 9090
	I0914 22:00:13.280032   25747 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 22:00:13.280043   25747 command_runner.go:130] > # metrics_socket = ""
	I0914 22:00:13.280056   25747 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 22:00:13.280070   25747 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 22:00:13.280085   25747 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 22:00:13.280097   25747 command_runner.go:130] > # certificate on any modification event.
	I0914 22:00:13.280107   25747 command_runner.go:130] > # metrics_cert = ""
	I0914 22:00:13.280115   25747 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 22:00:13.280121   25747 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 22:00:13.280128   25747 command_runner.go:130] > # metrics_key = ""
	I0914 22:00:13.280134   25747 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 22:00:13.280140   25747 command_runner.go:130] > [crio.tracing]
	I0914 22:00:13.280146   25747 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 22:00:13.280153   25747 command_runner.go:130] > # enable_tracing = false
	I0914 22:00:13.280159   25747 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 22:00:13.280166   25747 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 22:00:13.280171   25747 command_runner.go:130] > # Number of samples to collect per million spans.
	I0914 22:00:13.280178   25747 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 22:00:13.280184   25747 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 22:00:13.280191   25747 command_runner.go:130] > [crio.stats]
	I0914 22:00:13.280200   25747 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 22:00:13.280209   25747 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 22:00:13.280216   25747 command_runner.go:130] > # stats_collection_period = 0
	I0914 22:00:13.280253   25747 command_runner.go:130] ! time="2023-09-14 22:00:13.249904647Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0914 22:00:13.280266   25747 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
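	The block above is CRI-O's effective TOML configuration as dumped at startup. As a rough illustration (not minikube code), the same file can be decoded programmatically; the sketch below assumes the github.com/BurntSushi/toml package and only models a few of the keys highlighted in this run (pause_image, drop_infra_ctr, pinns_path).

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml" // assumed dependency, not part of minikube
)

// crioConfig models only the keys this sketch cares about.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			DropInfraCtr bool   `toml:"drop_infra_ctr"`
			PinnsPath    string `toml:"pinns_path"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// On the node the file lives at /etc/crio/crio.conf; point this at any local copy.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause_image:   ", cfg.Crio.Image.PauseImage)
	fmt.Println("drop_infra_ctr:", cfg.Crio.Runtime.DropInfraCtr)
	fmt.Println("pinns_path:    ", cfg.Crio.Runtime.PinnsPath)
}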
	I0914 22:00:13.280323   25747 cni.go:84] Creating CNI manager for ""
	I0914 22:00:13.280332   25747 cni.go:136] 2 nodes found, recommending kindnet
	I0914 22:00:13.280343   25747 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:00:13.280363   25747 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124911 NodeName:multinode-124911-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:00:13.280472   25747 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-124911-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
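	The podSubnet above (10.244.0.0/16) is carved into per-node /24 ranges by the controller-manager once allocate-node-cidrs is enabled; the node joined later in this log ends up with podCIDR 10.244.1.0/24. The stdlib-only sketch below reproduces that arithmetic for illustration; it is not the actual allocator.

package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"net"
)

// nodeCIDR returns the nodeIndex-th /24 inside the cluster pod CIDR.
func nodeCIDR(clusterCIDR string, nodeIndex int) (string, error) {
	_, ipnet, err := net.ParseCIDR(clusterCIDR)
	if err != nil {
		return "", err
	}
	base := binary.BigEndian.Uint32(ipnet.IP.To4())
	ip := make(net.IP, 4)
	// Each /24 is a 256-address slice of the /16.
	binary.BigEndian.PutUint32(ip, base+uint32(nodeIndex)*256)
	return fmt.Sprintf("%s/24", ip), nil
}

func main() {
	for i := 0; i < 2; i++ {
		cidr, err := nodeCIDR("10.244.0.0/16", i)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("node %d -> %s\n", i, cidr) // 10.244.0.0/24, then 10.244.1.0/24
	}
}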
	I0914 22:00:13.280550   25747 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-124911-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
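	The kubelet unit drop-in above is generated from per-node values (Kubernetes version, hostname override, node IP). A hypothetical text/template sketch of that kind of rendering is shown below; the field names are illustrative and not minikube's real template struct.

package main

import (
	"log"
	"os"
	"text/template"
)

// unitTemplate mirrors the drop-in shape logged above; the placeholders are
// hypothetical field names, not minikube's actual template.
const unitTemplate = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTemplate))
	data := struct{ KubernetesVersion, NodeName, NodeIP string }{
		KubernetesVersion: "v1.28.1",
		NodeName:          "multinode-124911-m02",
		NodeIP:            "192.168.39.254",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}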
	I0914 22:00:13.280604   25747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:00:13.289123   25747 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	I0914 22:00:13.289264   25747 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	
	Initiating transfer...
	I0914 22:00:13.289325   25747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.1
	I0914 22:00:13.297518   25747 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl.sha256
	I0914 22:00:13.297537   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/linux/amd64/v1.28.1/kubectl -> /var/lib/minikube/binaries/v1.28.1/kubectl
	I0914 22:00:13.297614   25747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl
	I0914 22:00:13.297625   25747 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/linux/amd64/v1.28.1/kubelet
	I0914 22:00:13.297652   25747 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/linux/amd64/v1.28.1/kubeadm
	I0914 22:00:13.301230   25747 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0914 22:00:13.301264   25747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0914 22:00:13.301283   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/linux/amd64/v1.28.1/kubectl --> /var/lib/minikube/binaries/v1.28.1/kubectl (49864704 bytes)
	I0914 22:00:26.721813   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:00:26.737217   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/linux/amd64/v1.28.1/kubelet -> /var/lib/minikube/binaries/v1.28.1/kubelet
	I0914 22:00:26.737406   25747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet
	I0914 22:00:26.741339   25747 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0914 22:00:26.741477   25747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0914 22:00:26.741511   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/linux/amd64/v1.28.1/kubelet --> /var/lib/minikube/binaries/v1.28.1/kubelet (110764032 bytes)
	I0914 22:00:38.844432   25747 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/linux/amd64/v1.28.1/kubeadm -> /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0914 22:00:38.844507   25747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0914 22:00:38.849245   25747 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0914 22:00:38.849437   25747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0914 22:00:38.849466   25747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/linux/amd64/v1.28.1/kubeadm --> /var/lib/minikube/binaries/v1.28.1/kubeadm (50749440 bytes)
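	Each binary above is fetched from dl.k8s.io together with its published .sha256 file. A minimal stdlib sketch of that download-and-verify pairing, with abbreviated error handling, follows.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into memory; fine for a sketch, not for 100MB kubelet
// binaries in production.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl"

	bin, err := fetch(url)
	if err != nil {
		log.Fatal(err)
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		log.Fatal(err)
	}

	want := strings.Fields(string(sumFile))[0] // the file holds the hex digest
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("checksum OK")
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		log.Fatal(err)
	}
}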
	I0914 22:00:39.075320   25747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0914 22:00:39.085064   25747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 22:00:39.100698   25747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:00:39.116578   25747 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0914 22:00:39.120182   25747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:00:39.131194   25747 host.go:66] Checking if "multinode-124911" exists ...
	I0914 22:00:39.131480   25747 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:00:39.131627   25747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:00:39.131673   25747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:00:39.146252   25747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I0914 22:00:39.146658   25747 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:00:39.147105   25747 main.go:141] libmachine: Using API Version  1
	I0914 22:00:39.147126   25747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:00:39.147440   25747 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:00:39.147690   25747 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:00:39.147818   25747 start.go:304] JoinCluster: &{Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:00:39.147896   25747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0914 22:00:39.147911   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:00:39.150682   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:00:39.151063   25747 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:00:39.151087   25747 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:00:39.151251   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:00:39.151404   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:00:39.151558   25747 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:00:39.151680   25747 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:00:39.323918   25747 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hwztr6.gfr54cxtsfyhfwt2 --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:00:39.323967   25747 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:00:39.323997   25747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hwztr6.gfr54cxtsfyhfwt2 --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-124911-m02"
	I0914 22:00:39.363498   25747 command_runner.go:130] > [preflight] Running pre-flight checks
	I0914 22:00:39.485925   25747 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0914 22:00:39.485960   25747 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0914 22:00:39.524418   25747 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:00:39.524449   25747 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:00:39.524460   25747 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 22:00:39.639907   25747 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0914 22:00:41.652620   25747 command_runner.go:130] > This node has joined the cluster:
	I0914 22:00:41.652648   25747 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0914 22:00:41.652659   25747 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0914 22:00:41.652670   25747 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0914 22:00:41.654537   25747 command_runner.go:130] ! W0914 22:00:39.355986     827 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0914 22:00:41.654563   25747 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:00:41.654587   25747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hwztr6.gfr54cxtsfyhfwt2 --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-124911-m02": (2.33057202s)
	I0914 22:00:41.654610   25747 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0914 22:00:41.909606   25747 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0914 22:00:41.909654   25747 start.go:306] JoinCluster complete in 2.76183636s
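	The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A stdlib-only sketch that recomputes it from ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path as it exists on the minikube node; any local copy of the CA cert works.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm's hash is taken over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}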
	I0914 22:00:41.909666   25747 cni.go:84] Creating CNI manager for ""
	I0914 22:00:41.909672   25747 cni.go:136] 2 nodes found, recommending kindnet
	I0914 22:00:41.909733   25747 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:00:41.914785   25747 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 22:00:41.914813   25747 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0914 22:00:41.914828   25747 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0914 22:00:41.914840   25747 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:00:41.914860   25747 command_runner.go:130] > Access: 2023-09-14 21:58:49.372289482 +0000
	I0914 22:00:41.914873   25747 command_runner.go:130] > Modify: 2023-09-13 23:09:37.000000000 +0000
	I0914 22:00:41.914882   25747 command_runner.go:130] > Change: 2023-09-14 21:58:47.705289482 +0000
	I0914 22:00:41.914894   25747 command_runner.go:130] >  Birth: -
	I0914 22:00:41.914943   25747 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 22:00:41.914958   25747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:00:41.933170   25747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 22:00:42.252838   25747 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:00:42.252868   25747 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:00:42.252877   25747 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0914 22:00:42.252886   25747 command_runner.go:130] > daemonset.apps/kindnet configured
	I0914 22:00:42.253266   25747 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:00:42.253514   25747 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:00:42.253835   25747 round_trippers.go:463] GET https://192.168.39.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 22:00:42.253847   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:42.253854   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:42.253860   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:42.255731   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:00:42.255747   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:42.255753   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:42.255758   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:42.255763   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:42.255768   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:42.255773   25747 round_trippers.go:580]     Content-Length: 291
	I0914 22:00:42.255778   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:42 GMT
	I0914 22:00:42.255783   25747 round_trippers.go:580]     Audit-Id: 4faa95a1-9f24-4e05-bfd8-12b71b245a42
	I0914 22:00:42.255802   25747 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d40ee9-9834-4f82-84c2-51e3c14c181f","resourceVersion":"416","creationTimestamp":"2023-09-14T21:59:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0914 22:00:42.255889   25747 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-124911" context rescaled to 1 replicas
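	The GET above targets the Scale subresource of the coredns Deployment before pinning it to one replica. A hedged client-go sketch of the same read-then-update, assuming client-go is available and reusing the kubeconfig path from this run:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path taken from this run; any valid kubeconfig works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17243-6287/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("kube-system")

	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("current replicas:", scale.Spec.Replicas)

	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
}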
	I0914 22:00:42.255921   25747 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:00:42.257849   25747 out.go:177] * Verifying Kubernetes components...
	I0914 22:00:42.259225   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:00:42.271846   25747 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:00:42.272140   25747 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:00:42.272439   25747 node_ready.go:35] waiting up to 6m0s for node "multinode-124911-m02" to be "Ready" ...
	I0914 22:00:42.272555   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:42.272569   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:42.272580   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:42.272590   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:42.278507   25747 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 22:00:42.278527   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:42.278535   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:42 GMT
	I0914 22:00:42.278542   25747 round_trippers.go:580]     Audit-Id: cf68611e-fd82-4b85-938c-9f569ac87c3f
	I0914 22:00:42.278548   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:42.278556   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:42.278564   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:42.278571   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:42.278583   25747 round_trippers.go:580]     Content-Length: 3531
	I0914 22:00:42.278731   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"486","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0914 22:00:42.279059   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:42.279077   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:42.279088   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:42.279097   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:42.281282   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:42.281296   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:42.281302   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:42.281308   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:42.281313   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:42.281318   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:42.281326   25747 round_trippers.go:580]     Content-Length: 3531
	I0914 22:00:42.281333   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:42 GMT
	I0914 22:00:42.281345   25747 round_trippers.go:580]     Audit-Id: a48c9c28-1d2a-4e46-906e-2b3781541900
	I0914 22:00:42.281497   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"486","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0914 22:00:42.782429   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:42.782453   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:42.782464   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:42.782477   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:42.785099   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:42.785122   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:42.785132   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:42.785141   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:42.785148   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:42.785157   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:42 GMT
	I0914 22:00:42.785164   25747 round_trippers.go:580]     Audit-Id: 9a7759e0-209a-4bba-a539-f5daff040f20
	I0914 22:00:42.785172   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:42.785181   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:42.785272   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:43.283074   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:43.283103   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:43.283116   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:43.283125   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:43.285888   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:43.285913   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:43.285924   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:43.285935   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:43 GMT
	I0914 22:00:43.285944   25747 round_trippers.go:580]     Audit-Id: 7de038c2-c29c-4acd-8b80-0efb7887741b
	I0914 22:00:43.285962   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:43.285971   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:43.285985   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:43.285998   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:43.286101   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:43.782774   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:43.782796   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:43.782804   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:43.782809   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:43.786082   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:43.786101   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:43.786108   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:43.786113   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:43.786119   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:43 GMT
	I0914 22:00:43.786124   25747 round_trippers.go:580]     Audit-Id: 034ab195-12e1-4c7b-ae23-2fbcb605ed69
	I0914 22:00:43.786131   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:43.786137   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:43.786142   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:43.786177   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:44.282273   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:44.282298   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:44.282311   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:44.282321   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:44.285774   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:44.285799   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:44.285807   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:44 GMT
	I0914 22:00:44.285815   25747 round_trippers.go:580]     Audit-Id: 3b4501a6-fdbe-4d9a-8926-06d6209a9d4e
	I0914 22:00:44.285824   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:44.285834   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:44.285843   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:44.285855   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:44.285864   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:44.285952   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:44.286240   25747 node_ready.go:58] node "multinode-124911-m02" has status "Ready":"False"
	I0914 22:00:44.782489   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:44.782525   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:44.782536   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:44.782548   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:44.786449   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:44.786503   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:44.786517   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:44.786526   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:44.786534   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:44.786546   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:44.786556   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:44.786566   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:44 GMT
	I0914 22:00:44.786573   25747 round_trippers.go:580]     Audit-Id: ad81b9b1-4086-4cc6-a7b1-b79b67c3aa87
	I0914 22:00:44.786666   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:45.282842   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:45.282864   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:45.282872   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:45.282883   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:45.285610   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:45.285633   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:45.285643   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:45.285651   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:45.285661   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:45.285674   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:45.285686   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:45 GMT
	I0914 22:00:45.285699   25747 round_trippers.go:580]     Audit-Id: 466dcb0e-ef24-4936-8532-1501816a439b
	I0914 22:00:45.285711   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:45.285802   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:45.781927   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:45.781957   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:45.781968   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:45.781978   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:45.785009   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:45.785035   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:45.785046   25747 round_trippers.go:580]     Audit-Id: 871de8da-c12b-4da9-b116-2e726ef5cd9d
	I0914 22:00:45.785055   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:45.785062   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:45.785070   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:45.785082   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:45.785094   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:45.785106   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:45 GMT
	I0914 22:00:45.785270   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:46.282674   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:46.282693   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:46.282701   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:46.282707   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:46.285951   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:46.285975   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:46.285984   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:46.285993   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:46.286002   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:46.286009   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:46.286015   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:46.286020   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:46 GMT
	I0914 22:00:46.286026   25747 round_trippers.go:580]     Audit-Id: 35343010-888e-4df8-afcf-62fba3b31b4f
	I0914 22:00:46.286119   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:46.286365   25747 node_ready.go:58] node "multinode-124911-m02" has status "Ready":"False"
	I0914 22:00:46.782892   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:46.782919   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:46.782927   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:46.782933   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:46.786352   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:46.786377   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:46.786386   25747 round_trippers.go:580]     Audit-Id: 50cd2d9b-294d-4ca8-bfec-d4f9df7c464f
	I0914 22:00:46.786394   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:46.786402   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:46.786409   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:46.786416   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:46.786423   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:46.786433   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:46 GMT
	I0914 22:00:46.786477   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:47.282169   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:47.282188   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:47.282196   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:47.282202   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:47.284904   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:47.284923   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:47.284933   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:47.284941   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:47 GMT
	I0914 22:00:47.284949   25747 round_trippers.go:580]     Audit-Id: 558a7f03-4fa3-4d2c-a26b-1af843efa71d
	I0914 22:00:47.284957   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:47.284968   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:47.284977   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:47.284987   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:47.285821   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:47.781976   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:47.782001   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:47.782009   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:47.782015   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:47.784749   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:47.784766   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:47.784772   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:47.784778   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:47.784784   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:47 GMT
	I0914 22:00:47.784792   25747 round_trippers.go:580]     Audit-Id: ee4cfd26-7223-41c0-b59f-861c75bd1410
	I0914 22:00:47.784800   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:47.784809   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:47.784818   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:47.784879   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:48.282463   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:48.282484   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:48.282492   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:48.282499   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:48.285294   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:48.285314   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:48.285321   25747 round_trippers.go:580]     Audit-Id: 9270bdea-e5c2-4297-84ad-9920f44a54f4
	I0914 22:00:48.285326   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:48.285331   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:48.285338   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:48.285345   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:48.285359   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:48.285371   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:48 GMT
	I0914 22:00:48.285483   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:48.781949   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:48.781982   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:48.781994   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:48.782004   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:48.784706   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:48.784736   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:48.784748   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:48 GMT
	I0914 22:00:48.784759   25747 round_trippers.go:580]     Audit-Id: b99cd426-b55b-4547-a343-5167a14a5b57
	I0914 22:00:48.784768   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:48.784778   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:48.784788   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:48.784799   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:48.784808   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:48.784901   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:48.785192   25747 node_ready.go:58] node "multinode-124911-m02" has status "Ready":"False"
	I0914 22:00:49.282016   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:49.282039   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:49.282051   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:49.282062   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:49.285530   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:49.285557   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:49.285568   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:49 GMT
	I0914 22:00:49.285577   25747 round_trippers.go:580]     Audit-Id: aba8252a-01c1-4f88-bd76-1575809d7b1e
	I0914 22:00:49.285585   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:49.285594   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:49.285603   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:49.285613   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:49.285630   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:49.285721   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:49.781955   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:49.781975   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:49.781983   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:49.781989   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:49.784662   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:49.784689   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:49.784699   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:49.784708   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:49 GMT
	I0914 22:00:49.784717   25747 round_trippers.go:580]     Audit-Id: 0e16fa74-f976-4624-b4f9-d4ef9afd4543
	I0914 22:00:49.784728   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:49.784739   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:49.784749   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:49.784760   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:49.784817   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:50.282012   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:50.282034   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:50.282042   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:50.282049   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:50.284876   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:50.284900   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:50.284908   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:50.284913   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:50.284919   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:50.284924   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:50.284930   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:50.284935   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:50 GMT
	I0914 22:00:50.284941   25747 round_trippers.go:580]     Audit-Id: de4bc40f-410f-4182-a597-54255492d331
	I0914 22:00:50.285010   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:50.782104   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:50.782123   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:50.782131   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:50.782136   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:50.784799   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:50.784815   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:50.784821   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:50.784826   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:50.784834   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:50.784839   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:50 GMT
	I0914 22:00:50.784844   25747 round_trippers.go:580]     Audit-Id: 76dce750-65fa-4319-a103-62af98b85045
	I0914 22:00:50.784849   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:50.784854   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:50.784915   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:51.282496   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:51.282516   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.282524   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.282533   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.285013   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:51.285032   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.285041   25747 round_trippers.go:580]     Content-Length: 3640
	I0914 22:00:51.285048   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.285057   25747 round_trippers.go:580]     Audit-Id: 9662ff46-50a3-4399-8b30-fbf32c2157c1
	I0914 22:00:51.285065   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.285074   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.285083   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.285090   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.285154   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"492","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0914 22:00:51.285370   25747 node_ready.go:58] node "multinode-124911-m02" has status "Ready":"False"
	I0914 22:00:51.782784   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:51.782805   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.782813   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.782819   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.785297   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:51.785314   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.785320   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.785325   25747 round_trippers.go:580]     Audit-Id: f0c4a775-ca8a-43fa-864f-a177954f845d
	I0914 22:00:51.785330   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.785340   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.785345   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.785355   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.785366   25747 round_trippers.go:580]     Content-Length: 3726
	I0914 22:00:51.785430   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"515","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I0914 22:00:51.785643   25747 node_ready.go:49] node "multinode-124911-m02" has status "Ready":"True"
	I0914 22:00:51.785657   25747 node_ready.go:38] duration metric: took 9.513200226s waiting for node "multinode-124911-m02" to be "Ready" ...
	I0914 22:00:51.785664   25747 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:00:51.785709   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:00:51.785716   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.785722   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.785730   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.788701   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:51.788719   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.788727   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.788733   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.788738   25747 round_trippers.go:580]     Audit-Id: f9103fc8-acf3-460b-87fe-b2988ed1b7e4
	I0914 22:00:51.788744   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.788752   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.788759   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.789872   25747 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"516"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"412","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67372 chars]
	I0914 22:00:51.791877   25747 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.791952   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:00:51.791963   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.791974   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.791985   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.794011   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:51.794028   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.794035   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.794040   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.794045   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.794051   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.794059   25747 round_trippers.go:580]     Audit-Id: 53ccaf5f-9e8d-493c-b906-e2828c1b441c
	I0914 22:00:51.794064   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.794307   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"412","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0914 22:00:51.794692   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:00:51.794706   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.794716   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.794724   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.796371   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:00:51.796388   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.796398   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.796404   25747 round_trippers.go:580]     Audit-Id: f5826272-2c04-495e-80bb-d43ede76e477
	I0914 22:00:51.796409   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.796414   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.796419   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.796425   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.796638   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"423","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6196 chars]
	I0914 22:00:51.796881   25747 pod_ready.go:92] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"True"
	I0914 22:00:51.796894   25747 pod_ready.go:81] duration metric: took 4.999894ms waiting for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.796901   25747 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.796943   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-124911
	I0914 22:00:51.796950   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.796957   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.796963   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.798677   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:00:51.798691   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.798697   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.798702   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.798707   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.798713   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.798725   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.798738   25747 round_trippers.go:580]     Audit-Id: 1a6cf8f3-0b63-4261-9618-ec3c55ad0bf6
	I0914 22:00:51.799015   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124911","namespace":"kube-system","uid":"1b195f1a-48a6-4b46-a819-2aeb9fe4e00c","resourceVersion":"382","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.116:2379","kubernetes.io/config.hash":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.mirror":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.seen":"2023-09-14T21:59:20.641783376Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0914 22:00:51.799401   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:00:51.799416   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.799426   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.799436   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.801007   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:00:51.801024   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.801033   25747 round_trippers.go:580]     Audit-Id: d342f3e7-26e1-4619-9a62-a98656db27cc
	I0914 22:00:51.801041   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.801051   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.801061   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.801067   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.801072   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.801197   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"423","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6196 chars]
	I0914 22:00:51.801480   25747 pod_ready.go:92] pod "etcd-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:00:51.801493   25747 pod_ready.go:81] duration metric: took 4.587931ms waiting for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.801506   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.801557   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124911
	I0914 22:00:51.801569   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.801579   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.801588   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.803146   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:00:51.803156   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.803162   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.803167   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.803174   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.803186   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.803200   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.803208   25747 round_trippers.go:580]     Audit-Id: 6ce48c50-076b-4a35-843f-9d942f24e03b
	I0914 22:00:51.803389   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124911","namespace":"kube-system","uid":"e9a93d33-82f3-4cfe-9b2c-92560dd09d09","resourceVersion":"383","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.116:8443","kubernetes.io/config.hash":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.mirror":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.seen":"2023-09-14T21:59:20.641778793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0914 22:00:51.803733   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:00:51.803745   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.803752   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.803757   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.805565   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:00:51.805576   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.805587   25747 round_trippers.go:580]     Audit-Id: 679691e8-327f-483d-aeab-c70ec742c3fc
	I0914 22:00:51.805595   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.805603   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.805611   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.805627   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.805636   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.805945   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"423","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6196 chars]
	I0914 22:00:51.806224   25747 pod_ready.go:92] pod "kube-apiserver-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:00:51.806237   25747 pod_ready.go:81] duration metric: took 4.724299ms waiting for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.806244   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.806290   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124911
	I0914 22:00:51.806300   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.806306   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.806312   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.807915   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:00:51.807927   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.807936   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.807942   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.807947   25747 round_trippers.go:580]     Audit-Id: e65c4ddb-1f03-47f0-92d7-d0331b7e176b
	I0914 22:00:51.807952   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.807959   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.807970   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.808325   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124911","namespace":"kube-system","uid":"3efae123-9cdd-457a-a317-77370a6c5288","resourceVersion":"384","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.mirror":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.seen":"2023-09-14T21:59:20.641781682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0914 22:00:51.808620   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:00:51.808629   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.808636   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.808641   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.810111   25747 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:00:51.810123   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.810130   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.810136   25747 round_trippers.go:580]     Audit-Id: dca15d16-a0a5-42ec-ba76-4b6242fe8d70
	I0914 22:00:51.810141   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.810146   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.810152   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.810160   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.810301   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"423","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6196 chars]
	I0914 22:00:51.810562   25747 pod_ready.go:92] pod "kube-controller-manager-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:00:51.810573   25747 pod_ready.go:81] duration metric: took 4.324312ms waiting for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.810581   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:51.982895   25747 request.go:629] Waited for 172.254334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:00:51.982954   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:00:51.982959   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:51.982966   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:51.982972   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:51.986249   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:51.986272   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:51.986282   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:51 GMT
	I0914 22:00:51.986291   25747 round_trippers.go:580]     Audit-Id: d8ffd908-fb01-4932-a118-2a6d68a7f3c5
	I0914 22:00:51.986300   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:51.986309   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:51.986320   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:51.986325   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:51.986489   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2kd4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"de9e2ee3-364a-447b-9d7f-be85d86838ae","resourceVersion":"375","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0914 22:00:52.183278   25747 request.go:629] Waited for 196.360419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:00:52.183342   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:00:52.183347   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:52.183354   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:52.183360   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:52.185942   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:52.185969   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:52.185979   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:52.185987   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:52 GMT
	I0914 22:00:52.185994   25747 round_trippers.go:580]     Audit-Id: daeb1fd7-2ed0-4ef7-b287-c58aaa82b783
	I0914 22:00:52.186002   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:52.186010   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:52.186018   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:52.186346   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"423","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6196 chars]
	I0914 22:00:52.186877   25747 pod_ready.go:92] pod "kube-proxy-2kd4p" in "kube-system" namespace has status "Ready":"True"
	I0914 22:00:52.186901   25747 pod_ready.go:81] duration metric: took 376.313836ms waiting for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:52.186916   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:52.382963   25747 request.go:629] Waited for 195.962833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:00:52.383036   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:00:52.383045   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:52.383055   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:52.383066   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:52.386119   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:52.386142   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:52.386151   25747 round_trippers.go:580]     Audit-Id: 6fb9848a-0a18-4343-b70b-bd3f94a60f30
	I0914 22:00:52.386160   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:52.386169   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:52.386177   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:52.386184   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:52.386196   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:52 GMT
	I0914 22:00:52.386366   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4qjg","generateName":"kube-proxy-","namespace":"kube-system","uid":"8214b42e-6656-4e01-bc47-82d6ab6592e5","resourceVersion":"501","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0914 22:00:52.583123   25747 request.go:629] Waited for 196.313839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:52.583190   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:00:52.583196   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:52.583209   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:52.583218   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:52.585820   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:52.585843   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:52.585853   25747 round_trippers.go:580]     Audit-Id: 3b1102aa-772e-46fd-8c94-1fc08748ca86
	I0914 22:00:52.585862   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:52.585870   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:52.585878   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:52.585886   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:52.585893   25747 round_trippers.go:580]     Content-Length: 3726
	I0914 22:00:52.585909   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:52 GMT
	I0914 22:00:52.586005   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"515","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I0914 22:00:52.586308   25747 pod_ready.go:92] pod "kube-proxy-c4qjg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:00:52.586334   25747 pod_ready.go:81] duration metric: took 399.40691ms waiting for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:52.586347   25747 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:52.783697   25747 request.go:629] Waited for 197.267844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:00:52.783762   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:00:52.783774   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:52.783784   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:52.783794   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:52.786473   25747 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:00:52.786496   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:52.786506   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:52 GMT
	I0914 22:00:52.786514   25747 round_trippers.go:580]     Audit-Id: 20a39776-c34a-4e8f-be7c-e5b83fad236b
	I0914 22:00:52.786522   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:52.786530   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:52.786538   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:52.786550   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:52.787351   25747 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124911","namespace":"kube-system","uid":"f8d502b7-9ee7-474e-ab64-9f721ee6970e","resourceVersion":"360","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.mirror":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.seen":"2023-09-14T21:59:20.641782607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0914 22:00:52.983030   25747 request.go:629] Waited for 195.288755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:00:52.983114   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:00:52.983121   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:52.983133   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:52.983143   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:52.986233   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:52.986259   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:52.986269   25747 round_trippers.go:580]     Audit-Id: 8c8f8f0c-d14f-46b7-9971-c2b18b8d0092
	I0914 22:00:52.986277   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:52.986285   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:52.986293   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:52.986302   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:52.986309   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:52 GMT
	I0914 22:00:52.986509   25747 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"423","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6196 chars]
	I0914 22:00:52.986856   25747 pod_ready.go:92] pod "kube-scheduler-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:00:52.986871   25747 pod_ready.go:81] duration metric: took 400.517118ms waiting for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:00:52.986886   25747 pod_ready.go:38] duration metric: took 1.201211075s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:00:52.986903   25747 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:00:52.986951   25747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:00:52.999538   25747 system_svc.go:56] duration metric: took 12.626934ms WaitForService to wait for kubelet.
	I0914 22:00:52.999561   25747 kubeadm.go:581] duration metric: took 10.743614619s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:00:52.999577   25747 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:00:53.182926   25747 request.go:629] Waited for 183.292245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0914 22:00:53.182985   25747 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0914 22:00:53.182994   25747 round_trippers.go:469] Request Headers:
	I0914 22:00:53.183006   25747 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:00:53.183017   25747 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:00:53.186434   25747 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:00:53.186467   25747 round_trippers.go:577] Response Headers:
	I0914 22:00:53.186476   25747 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:00:53.186484   25747 round_trippers.go:580]     Content-Type: application/json
	I0914 22:00:53.186491   25747 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:00:53.186498   25747 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:00:53.186504   25747 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:00:53 GMT
	I0914 22:00:53.186512   25747 round_trippers.go:580]     Audit-Id: eddc2688-712b-4aef-bcc5-decd7411c312
	I0914 22:00:53.186785   25747 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"423","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9823 chars]
	I0914 22:00:53.187257   25747 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:00:53.187275   25747 node_conditions.go:123] node cpu capacity is 2
	I0914 22:00:53.187284   25747 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:00:53.187288   25747 node_conditions.go:123] node cpu capacity is 2
	I0914 22:00:53.187292   25747 node_conditions.go:105] duration metric: took 187.711185ms to run NodePressure ...
	I0914 22:00:53.187302   25747 start.go:228] waiting for startup goroutines ...
	I0914 22:00:53.187327   25747 start.go:242] writing updated cluster config ...
	I0914 22:00:53.187613   25747 ssh_runner.go:195] Run: rm -f paused
	I0914 22:00:53.233741   25747 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:00:53.235932   25747 out.go:177] * Done! kubectl is now configured to use "multinode-124911" cluster and "default" namespace by default
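	[editor's note] The pod_ready.go entries above record the pattern minikube uses while waiting for control-plane pods: repeatedly GET the pod, read its Ready condition, and back off under client-side throttling. Purely as an illustration of that pattern (not minikube's actual code), the following is a minimal client-go sketch; the kubeconfig path, namespace, and pod name are assumptions chosen to match this log.

	// Illustrative sketch only: poll a pod's Ready condition with client-go,
	// mirroring the "waiting up to 6m0s for pod ... to be Ready" loop above.
	// Kubeconfig path, namespace, and pod name are assumed for the example.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; minikube writes one per profile.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll every 200ms for up to 6 minutes, matching the timeout logged above.
		err = wait.PollUntilContextTimeout(context.Background(), 200*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-124911", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	The client-side throttling waits in the log come from client-go's default rate limiter, which is why consecutive GETs above are spaced roughly 200ms apart even though each request completes in a few milliseconds.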
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 21:58:48 UTC, ends at Thu 2023-09-14 22:01:01 UTC. --
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.065523744Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3baad6a6694a847debf3a2237fa20426f2f1329dc51bfd222832417675b1bb99,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-pmkvp,Uid:854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728854288556238,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:00:53.956198526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a26d1a18d254ce3042e4f49c9bc1ac8b5204065d20b7e5fd404d0fd78295f8a4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:aada9d30-e15d-4405-a7e2-e979dd4b8e0d,Namespace:kube-system,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1694728782034143527,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/
tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T21:59:41.692999273Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b81063a39c271b03a97e172ac4cd3eecd23b160b05da59377b6f0c5ef658f687,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-ssj9q,Uid:aadacae8-9f4d-4c24-91c7-78a88d187f73,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728782012675121,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:59:41.681165046Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be7955bd798a99e23866b24f79ea922825877eca5ca5586b6d00e6b70e9c5dbd,Metadata:&PodSandboxMetadata{Name:kindnet-274xj,Uid:6d12f7c0-2ad9-436f-ab5d-528c4823865c,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1694728773734616035,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:59:33.400484429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:175d3142c242c16316105340b68f7492fae398b2372a7faf73a0148fc2d0ea2b,Metadata:&PodSandboxMetadata{Name:kube-proxy-2kd4p,Uid:de9e2ee3-364a-447b-9d7f-be85d86838ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728773661430687,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9e2ee3-364a-447b-9d7f-be85d86838ae,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T21:59:33.331288937Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39f40ea835b343bf401d13c39c68d891e5a26b2b81ac3d75a69f6d7d15111cd0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-124911,Uid:0364c35ea02d584f30ca0c3d8a47dfb6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728752872404232,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0364c35ea02d584f30ca0c3d8a47dfb6,kubernetes.io/config.seen: 2023-09-14T21:59:12.349242215Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:95c167f92ec62b0e0977b4ea7c286f7138262574ff78ec1b92dd3177b25d2b68,Metadata:&PodSandboxMetada
ta{Name:kube-scheduler-multinode-124911,Uid:1c19e8d6787ee446a44e05a606bee863,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728752861749359,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1c19e8d6787ee446a44e05a606bee863,kubernetes.io/config.seen: 2023-09-14T21:59:12.349243005Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18b009bd8de43f34c0457f4be50305c51b45f4b5752dd71d8bc5a2d02b6c7f87,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-124911,Uid:45ad3e9dc71d2c9a455002dbdc235854,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728752841817036,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-
124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.116:8443,kubernetes.io/config.hash: 45ad3e9dc71d2c9a455002dbdc235854,kubernetes.io/config.seen: 2023-09-14T21:59:12.349241043Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fb37d64d243571722f1f3fd42b47689c727a7b1d3187e4b37b81cbd921cc81a4,Metadata:&PodSandboxMetadata{Name:etcd-multinode-124911,Uid:87beacc0664a01f1abb8543be732cb2e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694728752828569318,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.116:2379,kubern
etes.io/config.hash: 87beacc0664a01f1abb8543be732cb2e,kubernetes.io/config.seen: 2023-09-14T21:59:12.349237542Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=e8455c7c-6b00-461e-8355-8daf047fc122 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.066771986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c4ceca2f-143e-41d3-87ec-ba0831f8d99f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.066861148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c4ceca2f-143e-41d3-87ec-ba0831f8d99f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.067163535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d130ac431feeec022c21f8fc59f6e240654c21319a69301f142642ce93647602,PodSandboxId:3baad6a6694a847debf3a2237fa20426f2f1329dc51bfd222832417675b1bb99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694728857591524895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef3fb15921cfb1889375b07e9a910061ff6657eaf2c7e1c9c7fdfcf8d8728f6,PodSandboxId:b81063a39c271b03a97e172ac4cd3eecd23b160b05da59377b6f0c5ef658f687,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694728782652510013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5fe30b5ca2b889b9a9ee1a38918a0453f9c3b4a93706bd6f2273dc7329f88a,PodSandboxId:a26d1a18d254ce3042e4f49c9bc1ac8b5204065d20b7e5fd404d0fd78295f8a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728782455465987,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4670f5aa85146115210955cc92845f2f4e5773168527211735bb4fd959e716b9,PodSandboxId:be7955bd798a99e23866b24f79ea922825877eca5ca5586b6d00e6b70e9c5dbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694728780432668064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5bf4347ec24a88b9aeeab649d2f3e60d8285d423ed6e4f4a22570eeef70a8f,PodSandboxId:175d3142c242c16316105340b68f7492fae398b2372a7faf73a0148fc2d0ea2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694728774611593238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8704cb761b531c2d6b1b9578d6b00ffe357b86d7e52efca4fecb89c54f28510a,PodSandboxId:95c167f92ec62b0e0977b4ea7c286f7138262574ff78ec1b92dd3177b25d2b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694728753855141821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19
e8d6787ee446a44e05a606bee863,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcaa33ed3678c3c412fc43661f376cd54e31e7c08c2c23c4c1c80b4e9efce41,PodSandboxId:fb37d64d243571722f1f3fd42b47689c727a7b1d3187e4b37b81cbd921cc81a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694728753573942035,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac5473f8a18b59469985f1b0d2124312046a0f90c42af2acf8373f566ae4a56,PodSandboxId:18b009bd8de43f34c0457f4be50305c51b45f4b5752dd71d8bc5a2d02b6c7f87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694728753413595031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations:map[string]string{io.
kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8e6a274583199387b45f130c8b4fd03aeae83b0803151dec7dff05cd3a0449,PodSandboxId:39f40ea835b343bf401d13c39c68d891e5a26b2b81ac3d75a69f6d7d15111cd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694728753285274666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c4ceca2f-143e-41d3-87ec-ba0831f8d99f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.158807210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4b083f92-5e51-474e-8794-21f7090ac8a7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.158891353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4b083f92-5e51-474e-8794-21f7090ac8a7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.159115280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d130ac431feeec022c21f8fc59f6e240654c21319a69301f142642ce93647602,PodSandboxId:3baad6a6694a847debf3a2237fa20426f2f1329dc51bfd222832417675b1bb99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694728857591524895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef3fb15921cfb1889375b07e9a910061ff6657eaf2c7e1c9c7fdfcf8d8728f6,PodSandboxId:b81063a39c271b03a97e172ac4cd3eecd23b160b05da59377b6f0c5ef658f687,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694728782652510013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5fe30b5ca2b889b9a9ee1a38918a0453f9c3b4a93706bd6f2273dc7329f88a,PodSandboxId:a26d1a18d254ce3042e4f49c9bc1ac8b5204065d20b7e5fd404d0fd78295f8a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728782455465987,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4670f5aa85146115210955cc92845f2f4e5773168527211735bb4fd959e716b9,PodSandboxId:be7955bd798a99e23866b24f79ea922825877eca5ca5586b6d00e6b70e9c5dbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694728780432668064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5bf4347ec24a88b9aeeab649d2f3e60d8285d423ed6e4f4a22570eeef70a8f,PodSandboxId:175d3142c242c16316105340b68f7492fae398b2372a7faf73a0148fc2d0ea2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694728774611593238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8704cb761b531c2d6b1b9578d6b00ffe357b86d7e52efca4fecb89c54f28510a,PodSandboxId:95c167f92ec62b0e0977b4ea7c286f7138262574ff78ec1b92dd3177b25d2b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694728753855141821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19
e8d6787ee446a44e05a606bee863,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcaa33ed3678c3c412fc43661f376cd54e31e7c08c2c23c4c1c80b4e9efce41,PodSandboxId:fb37d64d243571722f1f3fd42b47689c727a7b1d3187e4b37b81cbd921cc81a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694728753573942035,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac5473f8a18b59469985f1b0d2124312046a0f90c42af2acf8373f566ae4a56,PodSandboxId:18b009bd8de43f34c0457f4be50305c51b45f4b5752dd71d8bc5a2d02b6c7f87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694728753413595031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations:map[string]string{io.
kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8e6a274583199387b45f130c8b4fd03aeae83b0803151dec7dff05cd3a0449,PodSandboxId:39f40ea835b343bf401d13c39c68d891e5a26b2b81ac3d75a69f6d7d15111cd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694728753285274666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4b083f92-5e51-474e-8794-21f7090ac8a7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.195995767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=13c74e7c-843e-4a07-80ae-f663020075c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.196110533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=13c74e7c-843e-4a07-80ae-f663020075c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.196461879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d130ac431feeec022c21f8fc59f6e240654c21319a69301f142642ce93647602,PodSandboxId:3baad6a6694a847debf3a2237fa20426f2f1329dc51bfd222832417675b1bb99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694728857591524895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef3fb15921cfb1889375b07e9a910061ff6657eaf2c7e1c9c7fdfcf8d8728f6,PodSandboxId:b81063a39c271b03a97e172ac4cd3eecd23b160b05da59377b6f0c5ef658f687,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694728782652510013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5fe30b5ca2b889b9a9ee1a38918a0453f9c3b4a93706bd6f2273dc7329f88a,PodSandboxId:a26d1a18d254ce3042e4f49c9bc1ac8b5204065d20b7e5fd404d0fd78295f8a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728782455465987,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4670f5aa85146115210955cc92845f2f4e5773168527211735bb4fd959e716b9,PodSandboxId:be7955bd798a99e23866b24f79ea922825877eca5ca5586b6d00e6b70e9c5dbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694728780432668064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5bf4347ec24a88b9aeeab649d2f3e60d8285d423ed6e4f4a22570eeef70a8f,PodSandboxId:175d3142c242c16316105340b68f7492fae398b2372a7faf73a0148fc2d0ea2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694728774611593238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8704cb761b531c2d6b1b9578d6b00ffe357b86d7e52efca4fecb89c54f28510a,PodSandboxId:95c167f92ec62b0e0977b4ea7c286f7138262574ff78ec1b92dd3177b25d2b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694728753855141821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19
e8d6787ee446a44e05a606bee863,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcaa33ed3678c3c412fc43661f376cd54e31e7c08c2c23c4c1c80b4e9efce41,PodSandboxId:fb37d64d243571722f1f3fd42b47689c727a7b1d3187e4b37b81cbd921cc81a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694728753573942035,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac5473f8a18b59469985f1b0d2124312046a0f90c42af2acf8373f566ae4a56,PodSandboxId:18b009bd8de43f34c0457f4be50305c51b45f4b5752dd71d8bc5a2d02b6c7f87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694728753413595031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations:map[string]string{io.
kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8e6a274583199387b45f130c8b4fd03aeae83b0803151dec7dff05cd3a0449,PodSandboxId:39f40ea835b343bf401d13c39c68d891e5a26b2b81ac3d75a69f6d7d15111cd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694728753285274666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=13c74e7c-843e-4a07-80ae-f663020075c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.391810886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=933b9baf-2489-4e5b-8543-b2e0949183ec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.391917596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=933b9baf-2489-4e5b-8543-b2e0949183ec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:01:01 multinode-124911 crio[719]: time="2023-09-14 22:01:01.392203020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d130ac431feeec022c21f8fc59f6e240654c21319a69301f142642ce93647602,PodSandboxId:3baad6a6694a847debf3a2237fa20426f2f1329dc51bfd222832417675b1bb99,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694728857591524895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef3fb15921cfb1889375b07e9a910061ff6657eaf2c7e1c9c7fdfcf8d8728f6,PodSandboxId:b81063a39c271b03a97e172ac4cd3eecd23b160b05da59377b6f0c5ef658f687,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694728782652510013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5fe30b5ca2b889b9a9ee1a38918a0453f9c3b4a93706bd6f2273dc7329f88a,PodSandboxId:a26d1a18d254ce3042e4f49c9bc1ac8b5204065d20b7e5fd404d0fd78295f8a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694728782455465987,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4670f5aa85146115210955cc92845f2f4e5773168527211735bb4fd959e716b9,PodSandboxId:be7955bd798a99e23866b24f79ea922825877eca5ca5586b6d00e6b70e9c5dbd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694728780432668064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5bf4347ec24a88b9aeeab649d2f3e60d8285d423ed6e4f4a22570eeef70a8f,PodSandboxId:175d3142c242c16316105340b68f7492fae398b2372a7faf73a0148fc2d0ea2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694728774611593238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8704cb761b531c2d6b1b9578d6b00ffe357b86d7e52efca4fecb89c54f28510a,PodSandboxId:95c167f92ec62b0e0977b4ea7c286f7138262574ff78ec1b92dd3177b25d2b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694728753855141821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19
e8d6787ee446a44e05a606bee863,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcaa33ed3678c3c412fc43661f376cd54e31e7c08c2c23c4c1c80b4e9efce41,PodSandboxId:fb37d64d243571722f1f3fd42b47689c727a7b1d3187e4b37b81cbd921cc81a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694728753573942035,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations:map[strin
g]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac5473f8a18b59469985f1b0d2124312046a0f90c42af2acf8373f566ae4a56,PodSandboxId:18b009bd8de43f34c0457f4be50305c51b45f4b5752dd71d8bc5a2d02b6c7f87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694728753413595031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations:map[string]string{io.
kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa8e6a274583199387b45f130c8b4fd03aeae83b0803151dec7dff05cd3a0449,PodSandboxId:39f40ea835b343bf401d13c39c68d891e5a26b2b81ac3d75a69f6d7d15111cd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694728753285274666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=933b9baf-2489-4e5b-8543-b2e0949183ec name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	d130ac431feee       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   3baad6a6694a8
	aef3fb15921cf       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   b81063a39c271
	9d5fe30b5ca2b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   a26d1a18d254c
	4670f5aa85146       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052    About a minute ago   Running             kindnet-cni               0                   be7955bd798a9
	ca5bf4347ec24       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      About a minute ago   Running             kube-proxy                0                   175d3142c242c
	8704cb761b531       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      About a minute ago   Running             kube-scheduler            0                   95c167f92ec62
	8fcaa33ed3678       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   fb37d64d24357
	3ac5473f8a18b       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      About a minute ago   Running             kube-apiserver            0                   18b009bd8de43
	fa8e6a2745831       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      About a minute ago   Running             kube-controller-manager   0                   39f40ea835b34
	
	* 
	* ==> coredns [aef3fb15921cfb1889375b07e9a910061ff6657eaf2c7e1c9c7fdfcf8d8728f6] <==
	* [INFO] 10.244.1.2:52655 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157078s
	[INFO] 10.244.0.3:50161 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095989s
	[INFO] 10.244.0.3:58221 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001830594s
	[INFO] 10.244.0.3:50513 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147253s
	[INFO] 10.244.0.3:54991 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051557s
	[INFO] 10.244.0.3:36247 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001079849s
	[INFO] 10.244.0.3:48975 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000038507s
	[INFO] 10.244.0.3:56904 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078822s
	[INFO] 10.244.0.3:49220 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041059s
	[INFO] 10.244.1.2:58716 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234157s
	[INFO] 10.244.1.2:43792 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016077s
	[INFO] 10.244.1.2:51660 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137237s
	[INFO] 10.244.1.2:32920 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090839s
	[INFO] 10.244.0.3:47611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113921s
	[INFO] 10.244.0.3:59167 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080559s
	[INFO] 10.244.0.3:60137 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067994s
	[INFO] 10.244.0.3:53708 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005932s
	[INFO] 10.244.1.2:48012 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247668s
	[INFO] 10.244.1.2:52457 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106448s
	[INFO] 10.244.1.2:50482 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090522s
	[INFO] 10.244.1.2:52480 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100344s
	[INFO] 10.244.0.3:40572 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103735s
	[INFO] 10.244.0.3:49034 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070441s
	[INFO] 10.244.0.3:39596 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000047176s
	[INFO] 10.244.0.3:58366 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00003141s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-124911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-124911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=multinode-124911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T21_59_21_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:59:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-124911
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:00:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 21:59:51 +0000   Thu, 14 Sep 2023 21:59:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 21:59:51 +0000   Thu, 14 Sep 2023 21:59:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 21:59:51 +0000   Thu, 14 Sep 2023 21:59:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 21:59:51 +0000   Thu, 14 Sep 2023 21:59:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    multinode-124911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 429a16e07a544f27b6b8d5f36ed8ec0a
	  System UUID:                429a16e0-7a54-4f27-b6b8-d5f36ed8ec0a
	  Boot ID:                    36a381b7-9076-4937-a544-c403e36e3f42
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pmkvp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-ssj9q                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-multinode-124911                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         101s
	  kube-system                 kindnet-274xj                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      88s
	  kube-system                 kube-apiserver-multinode-124911             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-multinode-124911    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-2kd4p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-multinode-124911             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 86s   kube-proxy       
	  Normal  Starting                 101s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s  kubelet          Node multinode-124911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s  kubelet          Node multinode-124911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s  kubelet          Node multinode-124911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s   node-controller  Node multinode-124911 event: Registered Node multinode-124911 in Controller
	  Normal  NodeReady                80s   kubelet          Node multinode-124911 status is now: NodeReady
	
	
	Name:               multinode-124911-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-124911-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:00:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-124911-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:00:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:00:51 +0000   Thu, 14 Sep 2023 22:00:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:00:51 +0000   Thu, 14 Sep 2023 22:00:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:00:51 +0000   Thu, 14 Sep 2023 22:00:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:00:51 +0000   Thu, 14 Sep 2023 22:00:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.254
	  Hostname:    multinode-124911-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3aefae14bf79416aa65fd41eb4fa5db6
	  System UUID:                3aefae14-bf79-416a-a65f-d41eb4fa5db6
	  Boot ID:                    bd68551a-620c-42fc-a7e9-e2ffd3e3bb0e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-lv55w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-mmwd5               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20s
	  kube-system                 kube-proxy-c4qjg            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientMemory  20s (x5 over 21s)  kubelet          Node multinode-124911-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x5 over 21s)  kubelet          Node multinode-124911-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x5 over 21s)  kubelet          Node multinode-124911-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19s                node-controller  Node multinode-124911-m02 event: Registered Node multinode-124911-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-124911-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Sep14 21:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064180] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.174755] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.620045] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.131473] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.029020] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep14 21:59] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.103688] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.135509] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.105586] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.191902] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +8.267480] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +8.736489] systemd-fstab-generator[1263]: Ignoring "noauto" for root device
	[ +20.967542] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [8fcaa33ed3678c3c412fc43661f376cd54e31e7c08c2c23c4c1c80b4e9efce41] <==
	* {"level":"info","ts":"2023-09-14T21:59:15.210416Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2023-09-14T21:59:15.210643Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2023-09-14T21:59:15.211446Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8b2d6b6d639b2fdb","initial-advertise-peer-urls":["https://192.168.39.116:2380"],"listen-peer-urls":["https://192.168.39.116:2380"],"advertise-client-urls":["https://192.168.39.116:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.116:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T21:59:15.211513Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T21:59:16.161614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T21:59:16.161671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T21:59:16.161702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb received MsgPreVoteResp from 8b2d6b6d639b2fdb at term 1"}
	{"level":"info","ts":"2023-09-14T21:59:16.161715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T21:59:16.161721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb received MsgVoteResp from 8b2d6b6d639b2fdb at term 2"}
	{"level":"info","ts":"2023-09-14T21:59:16.161734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became leader at term 2"}
	{"level":"info","ts":"2023-09-14T21:59:16.161741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8b2d6b6d639b2fdb elected leader 8b2d6b6d639b2fdb at term 2"}
	{"level":"info","ts":"2023-09-14T21:59:16.163148Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8b2d6b6d639b2fdb","local-member-attributes":"{Name:multinode-124911 ClientURLs:[https://192.168.39.116:2379]}","request-path":"/0/members/8b2d6b6d639b2fdb/attributes","cluster-id":"d52e949b9fea4da5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T21:59:16.163225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:59:16.163515Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:59:16.16446Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T21:59:16.164599Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T21:59:16.165433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.116:2379"}
	{"level":"info","ts":"2023-09-14T21:59:16.166391Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T21:59:16.16643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T21:59:16.166469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d52e949b9fea4da5","local-member-id":"8b2d6b6d639b2fdb","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:59:16.166547Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T21:59:16.166586Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:00:18.9706Z","caller":"traceutil/trace.go:171","msg":"trace[1735547415] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"119.186378ms","start":"2023-09-14T22:00:18.851381Z","end":"2023-09-14T22:00:18.970567Z","steps":["trace[1735547415] 'process raft request'  (duration: 119.043727ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:00:19.225583Z","caller":"traceutil/trace.go:171","msg":"trace[637988894] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"210.325713ms","start":"2023-09-14T22:00:19.015203Z","end":"2023-09-14T22:00:19.225529Z","steps":["trace[637988894] 'process raft request'  (duration: 144.369576ms)","trace[637988894] 'compare'  (duration: 65.510003ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-14T22:00:45.271744Z","caller":"traceutil/trace.go:171","msg":"trace[1089998668] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"172.07659ms","start":"2023-09-14T22:00:45.099654Z","end":"2023-09-14T22:00:45.271731Z","steps":["trace[1089998668] 'process raft request'  (duration: 171.951796ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  22:01:01 up 2 min,  0 users,  load average: 0.23, 0.11, 0.04
	Linux multinode-124911 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [4670f5aa85146115210955cc92845f2f4e5773168527211735bb4fd959e716b9] <==
	* I0914 21:59:40.870116       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 21:59:40.870144       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 21:59:41.273029       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 21:59:41.365221       1 main.go:227] handling current node
	I0914 21:59:51.378821       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 21:59:51.378941       1 main.go:227] handling current node
	I0914 22:00:01.389099       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:00:01.389209       1 main.go:227] handling current node
	I0914 22:00:11.393583       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:00:11.393677       1 main.go:227] handling current node
	I0914 22:00:21.401533       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:00:21.401705       1 main.go:227] handling current node
	I0914 22:00:31.409013       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:00:31.409400       1 main.go:227] handling current node
	I0914 22:00:41.420519       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:00:41.420556       1 main.go:227] handling current node
	I0914 22:00:51.433148       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:00:51.433192       1 main.go:227] handling current node
	I0914 22:00:51.433203       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0914 22:00:51.433208       1 main.go:250] Node multinode-124911-m02 has CIDR [10.244.1.0/24] 
	I0914 22:00:51.433494       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.254 Flags: [] Table: 0} 
	I0914 22:01:01.450588       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:01:01.451076       1 main.go:227] handling current node
	I0914 22:01:01.451288       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0914 22:01:01.451423       1 main.go:250] Node multinode-124911-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [3ac5473f8a18b59469985f1b0d2124312046a0f90c42af2acf8373f566ae4a56] <==
	* I0914 21:59:17.605559       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 21:59:17.605582       1 cache.go:39] Caches are synced for autoregister controller
	E0914 21:59:17.658966       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0914 21:59:17.670974       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 21:59:17.693817       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 21:59:17.694035       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 21:59:17.694069       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 21:59:17.694154       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 21:59:17.695955       1 controller.go:624] quota admission added evaluator for: namespaces
	I0914 21:59:17.862102       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 21:59:18.502661       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0914 21:59:18.507283       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0914 21:59:18.507402       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 21:59:19.072469       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 21:59:19.115788       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 21:59:19.223051       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0914 21:59:19.234422       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.116]
	I0914 21:59:19.235433       1 controller.go:624] quota admission added evaluator for: endpoints
	I0914 21:59:19.239242       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 21:59:19.587383       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 21:59:20.541890       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 21:59:20.558518       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0914 21:59:20.574592       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 21:59:33.214432       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0914 21:59:33.300382       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [fa8e6a274583199387b45f130c8b4fd03aeae83b0803151dec7dff05cd3a0449] <==
	* I0914 21:59:34.051040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.431µs"
	I0914 21:59:41.682578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="113.294µs"
	I0914 21:59:41.715524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.188µs"
	I0914 21:59:42.591643       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0914 21:59:42.927000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.679µs"
	I0914 21:59:43.850924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.168662ms"
	I0914 21:59:43.851043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.02µs"
	I0914 22:00:41.510099       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-124911-m02\" does not exist"
	I0914 22:00:41.520620       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-124911-m02" podCIDRs=["10.244.1.0/24"]
	I0914 22:00:41.548560       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mmwd5"
	I0914 22:00:41.548605       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c4qjg"
	I0914 22:00:42.601044       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-124911-m02"
	I0914 22:00:42.601180       1 event.go:307] "Event occurred" object="multinode-124911-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-124911-m02 event: Registered Node multinode-124911-m02 in Controller"
	I0914 22:00:51.412977       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-124911-m02"
	I0914 22:00:53.903384       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0914 22:00:53.920571       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lv55w"
	I0914 22:00:53.945158       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-pmkvp"
	I0914 22:00:53.950818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.337492ms"
	I0914 22:00:53.977641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.628986ms"
	I0914 22:00:53.977766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.203µs"
	I0914 22:00:53.977819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.589µs"
	I0914 22:00:58.079653       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.156456ms"
	I0914 22:00:58.079802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.849µs"
	I0914 22:00:58.353464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.824901ms"
	I0914 22:00:58.354239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.134µs"
	
	* 
	* ==> kube-proxy [ca5bf4347ec24a88b9aeeab649d2f3e60d8285d423ed6e4f4a22570eeef70a8f] <==
	* I0914 21:59:34.820600       1 server_others.go:69] "Using iptables proxy"
	I0914 21:59:34.834598       1 node.go:141] Successfully retrieved node IP: 192.168.39.116
	I0914 21:59:34.876254       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 21:59:34.876412       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 21:59:34.878908       1 server_others.go:152] "Using iptables Proxier"
	I0914 21:59:34.878981       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 21:59:34.879233       1 server.go:846] "Version info" version="v1.28.1"
	I0914 21:59:34.879425       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 21:59:34.880735       1 config.go:188] "Starting service config controller"
	I0914 21:59:34.880820       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 21:59:34.880900       1 config.go:97] "Starting endpoint slice config controller"
	I0914 21:59:34.880965       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 21:59:34.881074       1 config.go:315] "Starting node config controller"
	I0914 21:59:34.881107       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 21:59:34.981590       1 shared_informer.go:318] Caches are synced for node config
	I0914 21:59:34.981593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 21:59:34.981619       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [8704cb761b531c2d6b1b9578d6b00ffe357b86d7e52efca4fecb89c54f28510a] <==
	* W0914 21:59:17.633403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 21:59:17.633543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:59:17.634388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 21:59:17.634845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 21:59:17.634852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 21:59:17.634858       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:59:17.634867       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:59:17.634874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 21:59:17.634887       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 21:59:17.634895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 21:59:17.634902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 21:59:17.634928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:59:18.628166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 21:59:18.628224       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 21:59:18.636842       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 21:59:18.636885       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0914 21:59:18.662286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 21:59:18.662374       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 21:59:18.703562       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 21:59:18.703664       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 21:59:18.749555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 21:59:18.749602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 21:59:18.801534       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 21:59:18.801589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0914 21:59:19.218227       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 21:58:48 UTC, ends at Thu 2023-09-14 22:01:01 UTC. --
	Sep 14 21:59:33 multinode-124911 kubelet[1270]: I0914 21:59:33.433979    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de9e2ee3-364a-447b-9d7f-be85d86838ae-lib-modules\") pod \"kube-proxy-2kd4p\" (UID: \"de9e2ee3-364a-447b-9d7f-be85d86838ae\") " pod="kube-system/kube-proxy-2kd4p"
	Sep 14 21:59:33 multinode-124911 kubelet[1270]: I0914 21:59:33.434056    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b29t\" (UniqueName: \"kubernetes.io/projected/de9e2ee3-364a-447b-9d7f-be85d86838ae-kube-api-access-4b29t\") pod \"kube-proxy-2kd4p\" (UID: \"de9e2ee3-364a-447b-9d7f-be85d86838ae\") " pod="kube-system/kube-proxy-2kd4p"
	Sep 14 21:59:33 multinode-124911 kubelet[1270]: I0914 21:59:33.434086    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6d12f7c0-2ad9-436f-ab5d-528c4823865c-cni-cfg\") pod \"kindnet-274xj\" (UID: \"6d12f7c0-2ad9-436f-ab5d-528c4823865c\") " pod="kube-system/kindnet-274xj"
	Sep 14 21:59:33 multinode-124911 kubelet[1270]: I0914 21:59:33.434112    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d12f7c0-2ad9-436f-ab5d-528c4823865c-lib-modules\") pod \"kindnet-274xj\" (UID: \"6d12f7c0-2ad9-436f-ab5d-528c4823865c\") " pod="kube-system/kindnet-274xj"
	Sep 14 21:59:33 multinode-124911 kubelet[1270]: I0914 21:59:33.434133    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de9e2ee3-364a-447b-9d7f-be85d86838ae-xtables-lock\") pod \"kube-proxy-2kd4p\" (UID: \"de9e2ee3-364a-447b-9d7f-be85d86838ae\") " pod="kube-system/kube-proxy-2kd4p"
	Sep 14 21:59:33 multinode-124911 kubelet[1270]: I0914 21:59:33.434154    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7sw\" (UniqueName: \"kubernetes.io/projected/6d12f7c0-2ad9-436f-ab5d-528c4823865c-kube-api-access-mg7sw\") pod \"kindnet-274xj\" (UID: \"6d12f7c0-2ad9-436f-ab5d-528c4823865c\") " pod="kube-system/kindnet-274xj"
	Sep 14 21:59:33 multinode-124911 kubelet[1270]: I0914 21:59:33.434176    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de9e2ee3-364a-447b-9d7f-be85d86838ae-kube-proxy\") pod \"kube-proxy-2kd4p\" (UID: \"de9e2ee3-364a-447b-9d7f-be85d86838ae\") " pod="kube-system/kube-proxy-2kd4p"
	Sep 14 21:59:33 multinode-124911 kubelet[1270]: I0914 21:59:33.434204    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d12f7c0-2ad9-436f-ab5d-528c4823865c-xtables-lock\") pod \"kindnet-274xj\" (UID: \"6d12f7c0-2ad9-436f-ab5d-528c4823865c\") " pod="kube-system/kindnet-274xj"
	Sep 14 21:59:40 multinode-124911 kubelet[1270]: I0914 21:59:40.669108    1270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2kd4p" podStartSLOduration=7.669069776 podCreationTimestamp="2023-09-14 21:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 21:59:34.787418583 +0000 UTC m=+14.267202746" watchObservedRunningTime="2023-09-14 21:59:40.669069776 +0000 UTC m=+20.148853931"
	Sep 14 21:59:41 multinode-124911 kubelet[1270]: I0914 21:59:41.636798    1270 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 14 21:59:41 multinode-124911 kubelet[1270]: I0914 21:59:41.681654    1270 topology_manager.go:215] "Topology Admit Handler" podUID="aadacae8-9f4d-4c24-91c7-78a88d187f73" podNamespace="kube-system" podName="coredns-5dd5756b68-ssj9q"
	Sep 14 21:59:41 multinode-124911 kubelet[1270]: I0914 21:59:41.691275    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aadacae8-9f4d-4c24-91c7-78a88d187f73-config-volume\") pod \"coredns-5dd5756b68-ssj9q\" (UID: \"aadacae8-9f4d-4c24-91c7-78a88d187f73\") " pod="kube-system/coredns-5dd5756b68-ssj9q"
	Sep 14 21:59:41 multinode-124911 kubelet[1270]: I0914 21:59:41.691393    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62bz8\" (UniqueName: \"kubernetes.io/projected/aadacae8-9f4d-4c24-91c7-78a88d187f73-kube-api-access-62bz8\") pod \"coredns-5dd5756b68-ssj9q\" (UID: \"aadacae8-9f4d-4c24-91c7-78a88d187f73\") " pod="kube-system/coredns-5dd5756b68-ssj9q"
	Sep 14 21:59:41 multinode-124911 kubelet[1270]: I0914 21:59:41.693082    1270 topology_manager.go:215] "Topology Admit Handler" podUID="aada9d30-e15d-4405-a7e2-e979dd4b8e0d" podNamespace="kube-system" podName="storage-provisioner"
	Sep 14 21:59:41 multinode-124911 kubelet[1270]: I0914 21:59:41.791865    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aada9d30-e15d-4405-a7e2-e979dd4b8e0d-tmp\") pod \"storage-provisioner\" (UID: \"aada9d30-e15d-4405-a7e2-e979dd4b8e0d\") " pod="kube-system/storage-provisioner"
	Sep 14 21:59:41 multinode-124911 kubelet[1270]: I0914 21:59:41.791914    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rhk\" (UniqueName: \"kubernetes.io/projected/aada9d30-e15d-4405-a7e2-e979dd4b8e0d-kube-api-access-z7rhk\") pod \"storage-provisioner\" (UID: \"aada9d30-e15d-4405-a7e2-e979dd4b8e0d\") " pod="kube-system/storage-provisioner"
	Sep 14 21:59:42 multinode-124911 kubelet[1270]: I0914 21:59:42.821169    1270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-274xj" podStartSLOduration=6.009886298 podCreationTimestamp="2023-09-14 21:59:33 +0000 UTC" firstStartedPulling="2023-09-14 21:59:36.597561095 +0000 UTC m=+16.077345238" lastFinishedPulling="2023-09-14 21:59:40.408805992 +0000 UTC m=+19.888590137" observedRunningTime="2023-09-14 21:59:41.819436544 +0000 UTC m=+21.299220707" watchObservedRunningTime="2023-09-14 21:59:42.821131197 +0000 UTC m=+22.300915359"
	Sep 14 21:59:42 multinode-124911 kubelet[1270]: I0914 21:59:42.925748    1270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=8.925710939 podCreationTimestamp="2023-09-14 21:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 21:59:42.824169114 +0000 UTC m=+22.303953277" watchObservedRunningTime="2023-09-14 21:59:42.925710939 +0000 UTC m=+22.405495102"
	Sep 14 21:59:43 multinode-124911 kubelet[1270]: I0914 21:59:43.829945    1270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ssj9q" podStartSLOduration=10.829912412 podCreationTimestamp="2023-09-14 21:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 21:59:42.926450285 +0000 UTC m=+22.406234445" watchObservedRunningTime="2023-09-14 21:59:43.829912412 +0000 UTC m=+23.309696574"
	Sep 14 22:00:20 multinode-124911 kubelet[1270]: E0914 22:00:20.720175    1270 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:00:20 multinode-124911 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:00:20 multinode-124911 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:00:20 multinode-124911 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:00:53 multinode-124911 kubelet[1270]: I0914 22:00:53.956502    1270 topology_manager.go:215] "Topology Admit Handler" podUID="854464d1-c06e-45fe-a6c7-9c8b82f8b8f7" podNamespace="default" podName="busybox-5bc68d56bd-pmkvp"
	Sep 14 22:00:54 multinode-124911 kubelet[1270]: I0914 22:00:54.090431    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v8hg\" (UniqueName: \"kubernetes.io/projected/854464d1-c06e-45fe-a6c7-9c8b82f8b8f7-kube-api-access-7v8hg\") pod \"busybox-5bc68d56bd-pmkvp\" (UID: \"854464d1-c06e-45fe-a6c7-9c8b82f8b8f7\") " pod="default/busybox-5bc68d56bd-pmkvp"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-124911 -n multinode-124911
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-124911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.01s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (685.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-124911
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-124911
E0914 22:03:32.190315   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-124911: exit status 82 (2m1.225442242s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-124911"  ...
	* Stopping node "multinode-124911"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-124911" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124911 --wait=true -v=8 --alsologtostderr
E0914 22:04:29.764886   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:05:52.809443   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:06:36.475856   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 22:08:32.189736   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 22:09:29.765203   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:09:55.237394   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 22:11:36.474839   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 22:12:59.520609   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 22:13:32.189789   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-124911 --wait=true -v=8 --alsologtostderr: (9m21.816793167s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-124911
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-124911 -n multinode-124911
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-124911 logs -n 25: (1.461370877s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-124911 ssh -n                                                                 | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-124911 cp multinode-124911-m02:/home/docker/cp-test.txt                       | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1415921513/001/cp-test_multinode-124911-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n                                                                 | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-124911 cp multinode-124911-m02:/home/docker/cp-test.txt                       | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911:/home/docker/cp-test_multinode-124911-m02_multinode-124911.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n                                                                 | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n multinode-124911 sudo cat                                       | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | /home/docker/cp-test_multinode-124911-m02_multinode-124911.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-124911 cp multinode-124911-m02:/home/docker/cp-test.txt                       | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m03:/home/docker/cp-test_multinode-124911-m02_multinode-124911-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n                                                                 | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n multinode-124911-m03 sudo cat                                   | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | /home/docker/cp-test_multinode-124911-m02_multinode-124911-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-124911 cp testdata/cp-test.txt                                                | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n                                                                 | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-124911 cp multinode-124911-m03:/home/docker/cp-test.txt                       | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1415921513/001/cp-test_multinode-124911-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n                                                                 | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-124911 cp multinode-124911-m03:/home/docker/cp-test.txt                       | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911:/home/docker/cp-test_multinode-124911-m03_multinode-124911.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n                                                                 | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n multinode-124911 sudo cat                                       | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | /home/docker/cp-test_multinode-124911-m03_multinode-124911.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-124911 cp multinode-124911-m03:/home/docker/cp-test.txt                       | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m02:/home/docker/cp-test_multinode-124911-m03_multinode-124911-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n                                                                 | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | multinode-124911-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-124911 ssh -n multinode-124911-m02 sudo cat                                   | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	|         | /home/docker/cp-test_multinode-124911-m03_multinode-124911-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-124911 node stop m03                                                          | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:01 UTC |
	| node    | multinode-124911 node start                                                             | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:01 UTC | 14 Sep 23 22:02 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-124911                                                                | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:02 UTC |                     |
	| stop    | -p multinode-124911                                                                     | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:02 UTC |                     |
	| start   | -p multinode-124911                                                                     | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:04 UTC | 14 Sep 23 22:13 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-124911                                                                | multinode-124911 | jenkins | v1.31.2 | 14 Sep 23 22:13 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:04:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:04:26.296205   29206 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:04:26.296479   29206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:04:26.296489   29206 out.go:309] Setting ErrFile to fd 2...
	I0914 22:04:26.296494   29206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:04:26.296683   29206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:04:26.297195   29206 out.go:303] Setting JSON to false
	I0914 22:04:26.298127   29206 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2809,"bootTime":1694726258,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:04:26.298185   29206 start.go:138] virtualization: kvm guest
	I0914 22:04:26.301215   29206 out.go:177] * [multinode-124911] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:04:26.302506   29206 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:04:26.302547   29206 notify.go:220] Checking for updates...
	I0914 22:04:26.303962   29206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:04:26.305346   29206 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:04:26.306656   29206 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:04:26.307852   29206 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:04:26.309015   29206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:04:26.310786   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:04:26.310890   29206 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:04:26.311514   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:04:26.311566   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:04:26.325563   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0914 22:04:26.326022   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:04:26.326526   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:04:26.326547   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:04:26.326888   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:04:26.327094   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:04:26.360071   29206 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:04:26.361380   29206 start.go:298] selected driver: kvm2
	I0914 22:04:26.361393   29206 start.go:902] validating driver "kvm2" against &{Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:04:26.361514   29206 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:04:26.361827   29206 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:04:26.361895   29206 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:04:26.375765   29206 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:04:26.376427   29206 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:04:26.376462   29206 cni.go:84] Creating CNI manager for ""
	I0914 22:04:26.376473   29206 cni.go:136] 3 nodes found, recommending kindnet
	I0914 22:04:26.376480   29206 start_flags.go:321] config:
	{Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:04:26.376786   29206 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:04:26.378884   29206 out.go:177] * Starting control plane node multinode-124911 in cluster multinode-124911
	I0914 22:04:26.380095   29206 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:04:26.380127   29206 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0914 22:04:26.380135   29206 cache.go:57] Caching tarball of preloaded images
	I0914 22:04:26.380231   29206 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:04:26.380247   29206 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 22:04:26.380400   29206 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 22:04:26.380621   29206 start.go:365] acquiring machines lock for multinode-124911: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:04:26.380666   29206 start.go:369] acquired machines lock for "multinode-124911" in 24.321µs
	I0914 22:04:26.380685   29206 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:04:26.380692   29206 fix.go:54] fixHost starting: 
	I0914 22:04:26.380970   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:04:26.381005   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:04:26.394275   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0914 22:04:26.394666   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:04:26.395147   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:04:26.395169   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:04:26.395443   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:04:26.395629   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:04:26.395772   29206 main.go:141] libmachine: (multinode-124911) Calling .GetState
	I0914 22:04:26.397154   29206 fix.go:102] recreateIfNeeded on multinode-124911: state=Running err=<nil>
	W0914 22:04:26.397175   29206 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:04:26.399244   29206 out.go:177] * Updating the running kvm2 "multinode-124911" VM ...
	I0914 22:04:26.400582   29206 machine.go:88] provisioning docker machine ...
	I0914 22:04:26.400605   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:04:26.400778   29206 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 22:04:26.400910   29206 buildroot.go:166] provisioning hostname "multinode-124911"
	I0914 22:04:26.400930   29206 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 22:04:26.401044   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:04:26.403215   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:04:26.403675   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:04:26.403703   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:04:26.403855   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:04:26.404043   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:04:26.404168   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:04:26.404267   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:04:26.404450   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:04:26.404773   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 22:04:26.404787   29206 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-124911 && echo "multinode-124911" | sudo tee /etc/hostname
	I0914 22:04:44.795719   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:04:50.875750   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:04:53.947709   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:00.027784   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:03.099750   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:09.179780   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:12.251714   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:18.331740   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:21.403718   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:27.483740   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:30.555724   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:36.635757   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:39.707701   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:45.787792   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:48.859746   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:54.939728   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:05:58.011862   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:04.091761   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:07.163808   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:13.243748   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:16.315829   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:22.395769   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:25.467749   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:31.547726   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:34.619733   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:40.699773   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:43.771752   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:49.851744   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:52.923708   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:06:59.003697   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:02.075946   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:08.155778   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:11.227692   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:17.307710   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:20.379664   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:26.459783   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:29.531668   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:35.611762   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:38.683728   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:44.763762   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:47.835672   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:53.915714   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:07:56.987653   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:03.067729   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:06.139761   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:12.219704   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:15.291758   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:21.371738   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:24.443778   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:30.523777   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:33.595718   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:39.675727   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:42.747776   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:48.827709   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:51.903678   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:08:57.979718   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:09:01.051705   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:09:07.131719   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:09:10.203745   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:09:16.283707   29206 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.116:22: connect: no route to host
	I0914 22:09:19.285869   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:09:19.285903   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:19.287984   29206 machine.go:91] provisioned docker machine in 4m52.887378676s
	I0914 22:09:19.288020   29206 fix.go:56] fixHost completed within 4m52.907329203s
	I0914 22:09:19.288026   29206 start.go:83] releasing machines lock for "multinode-124911", held for 4m52.907348791s
	W0914 22:09:19.288053   29206 start.go:688] error starting host: provision: host is not running
	W0914 22:09:19.288145   29206 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0914 22:09:19.288158   29206 start.go:703] Will try again in 5 seconds ...
	I0914 22:09:24.291074   29206 start.go:365] acquiring machines lock for multinode-124911: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:09:24.291198   29206 start.go:369] acquired machines lock for "multinode-124911" in 84.063µs
	I0914 22:09:24.291228   29206 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:09:24.291236   29206 fix.go:54] fixHost starting: 
	I0914 22:09:24.291560   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:09:24.291593   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:09:24.305991   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36327
	I0914 22:09:24.306385   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:09:24.306880   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:09:24.306903   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:09:24.307286   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:09:24.307517   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:09:24.307716   29206 main.go:141] libmachine: (multinode-124911) Calling .GetState
	I0914 22:09:24.309363   29206 fix.go:102] recreateIfNeeded on multinode-124911: state=Stopped err=<nil>
	I0914 22:09:24.309382   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	W0914 22:09:24.309535   29206 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:09:24.311291   29206 out.go:177] * Restarting existing kvm2 VM for "multinode-124911" ...
	I0914 22:09:24.312914   29206 main.go:141] libmachine: (multinode-124911) Calling .Start
	I0914 22:09:24.313096   29206 main.go:141] libmachine: (multinode-124911) Ensuring networks are active...
	I0914 22:09:24.313770   29206 main.go:141] libmachine: (multinode-124911) Ensuring network default is active
	I0914 22:09:24.314095   29206 main.go:141] libmachine: (multinode-124911) Ensuring network mk-multinode-124911 is active
	I0914 22:09:24.314423   29206 main.go:141] libmachine: (multinode-124911) Getting domain xml...
	I0914 22:09:24.315080   29206 main.go:141] libmachine: (multinode-124911) Creating domain...
	I0914 22:09:25.524407   29206 main.go:141] libmachine: (multinode-124911) Waiting to get IP...
	I0914 22:09:25.525099   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:25.525495   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:25.525589   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:25.525490   30024 retry.go:31] will retry after 303.259856ms: waiting for machine to come up
	I0914 22:09:25.830183   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:25.830618   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:25.830637   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:25.830567   30024 retry.go:31] will retry after 308.353978ms: waiting for machine to come up
	I0914 22:09:26.140115   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:26.140616   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:26.140648   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:26.140565   30024 retry.go:31] will retry after 450.622013ms: waiting for machine to come up
	I0914 22:09:26.593108   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:26.593546   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:26.593581   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:26.593484   30024 retry.go:31] will retry after 412.886209ms: waiting for machine to come up
	I0914 22:09:27.008064   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:27.008479   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:27.008520   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:27.008460   30024 retry.go:31] will retry after 562.061212ms: waiting for machine to come up
	I0914 22:09:27.572084   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:27.572388   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:27.572428   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:27.572321   30024 retry.go:31] will retry after 625.818805ms: waiting for machine to come up
	I0914 22:09:28.200113   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:28.200537   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:28.200556   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:28.200497   30024 retry.go:31] will retry after 1.133972382s: waiting for machine to come up
	I0914 22:09:29.335965   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:29.336437   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:29.336472   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:29.336394   30024 retry.go:31] will retry after 1.23299227s: waiting for machine to come up
	I0914 22:09:30.570677   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:30.571126   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:30.571162   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:30.571078   30024 retry.go:31] will retry after 1.653700324s: waiting for machine to come up
	I0914 22:09:32.226048   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:32.226493   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:32.226518   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:32.226448   30024 retry.go:31] will retry after 1.743977657s: waiting for machine to come up
	I0914 22:09:33.972313   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:33.972713   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:33.972747   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:33.972654   30024 retry.go:31] will retry after 2.841978699s: waiting for machine to come up
	I0914 22:09:36.816676   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:36.817109   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:36.817139   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:36.817068   30024 retry.go:31] will retry after 2.917602903s: waiting for machine to come up
	I0914 22:09:39.736727   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:39.737143   29206 main.go:141] libmachine: (multinode-124911) DBG | unable to find current IP address of domain multinode-124911 in network mk-multinode-124911
	I0914 22:09:39.737171   29206 main.go:141] libmachine: (multinode-124911) DBG | I0914 22:09:39.737082   30024 retry.go:31] will retry after 3.081110748s: waiting for machine to come up
	I0914 22:09:42.822449   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:42.822839   29206 main.go:141] libmachine: (multinode-124911) Found IP for machine: 192.168.39.116
	I0914 22:09:42.822859   29206 main.go:141] libmachine: (multinode-124911) Reserving static IP address...
	I0914 22:09:42.822878   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has current primary IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:42.823276   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "multinode-124911", mac: "52:54:00:97:3f:c1", ip: "192.168.39.116"} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:42.823311   29206 main.go:141] libmachine: (multinode-124911) DBG | skip adding static IP to network mk-multinode-124911 - found existing host DHCP lease matching {name: "multinode-124911", mac: "52:54:00:97:3f:c1", ip: "192.168.39.116"}
	I0914 22:09:42.823325   29206 main.go:141] libmachine: (multinode-124911) Reserved static IP address: 192.168.39.116
	I0914 22:09:42.823366   29206 main.go:141] libmachine: (multinode-124911) Waiting for SSH to be available...
	I0914 22:09:42.823392   29206 main.go:141] libmachine: (multinode-124911) DBG | Getting to WaitForSSH function...
	I0914 22:09:42.825540   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:42.825933   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:42.825967   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:42.826069   29206 main.go:141] libmachine: (multinode-124911) DBG | Using SSH client type: external
	I0914 22:09:42.826101   29206 main.go:141] libmachine: (multinode-124911) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa (-rw-------)
	I0914 22:09:42.826143   29206 main.go:141] libmachine: (multinode-124911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:09:42.826164   29206 main.go:141] libmachine: (multinode-124911) DBG | About to run SSH command:
	I0914 22:09:42.826174   29206 main.go:141] libmachine: (multinode-124911) DBG | exit 0
	I0914 22:09:42.918871   29206 main.go:141] libmachine: (multinode-124911) DBG | SSH cmd err, output: <nil>: 
	I0914 22:09:42.919215   29206 main.go:141] libmachine: (multinode-124911) Calling .GetConfigRaw
	I0914 22:09:42.919865   29206 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 22:09:42.922096   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:42.922449   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:42.922486   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:42.922750   29206 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 22:09:42.922912   29206 machine.go:88] provisioning docker machine ...
	I0914 22:09:42.922936   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:09:42.923130   29206 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 22:09:42.923276   29206 buildroot.go:166] provisioning hostname "multinode-124911"
	I0914 22:09:42.923298   29206 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 22:09:42.923424   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:42.925673   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:42.926011   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:42.926043   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:42.926171   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:09:42.926342   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:42.926474   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:42.926586   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:09:42.926730   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:09:42.927028   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 22:09:42.927041   29206 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-124911 && echo "multinode-124911" | sudo tee /etc/hostname
	I0914 22:09:43.067435   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-124911
	
	I0914 22:09:43.067479   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:43.070177   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.070581   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:43.070612   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.070832   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:09:43.071064   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:43.071244   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:43.071371   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:09:43.071530   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:09:43.071822   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 22:09:43.071840   29206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124911/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:09:43.208100   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:09:43.208127   29206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:09:43.208147   29206 buildroot.go:174] setting up certificates
	I0914 22:09:43.208158   29206 provision.go:83] configureAuth start
	I0914 22:09:43.208170   29206 main.go:141] libmachine: (multinode-124911) Calling .GetMachineName
	I0914 22:09:43.208419   29206 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 22:09:43.211112   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.211496   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:43.211529   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.211626   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:43.214072   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.214370   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:43.214390   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.214546   29206 provision.go:138] copyHostCerts
	I0914 22:09:43.214575   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:09:43.214612   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:09:43.214622   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:09:43.214683   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:09:43.214753   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:09:43.214774   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:09:43.214782   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:09:43.214810   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:09:43.214850   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:09:43.214866   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:09:43.214872   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:09:43.214893   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:09:43.214945   29206 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.multinode-124911 san=[192.168.39.116 192.168.39.116 localhost 127.0.0.1 minikube multinode-124911]
	I0914 22:09:43.307998   29206 provision.go:172] copyRemoteCerts
	I0914 22:09:43.308057   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:09:43.308080   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:43.310832   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.311152   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:43.311181   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.311370   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:09:43.311608   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:43.311820   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:09:43.311953   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:09:43.404361   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 22:09:43.404443   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:09:43.428208   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 22:09:43.428270   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 22:09:43.451279   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 22:09:43.451340   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:09:43.473976   29206 provision.go:86] duration metric: configureAuth took 265.803775ms
	I0914 22:09:43.474000   29206 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:09:43.474253   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:09:43.474349   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:43.476832   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.477253   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:43.477286   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.477428   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:09:43.477615   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:43.477762   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:43.477907   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:09:43.478065   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:09:43.478378   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 22:09:43.478401   29206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:09:43.777334   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:09:43.777357   29206 machine.go:91] provisioned docker machine in 854.432478ms
	I0914 22:09:43.777367   29206 start.go:300] post-start starting for "multinode-124911" (driver="kvm2")
	I0914 22:09:43.777380   29206 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:09:43.777402   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:09:43.777702   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:09:43.777731   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:43.780870   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.781272   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:43.781315   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.781511   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:09:43.781696   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:43.781848   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:09:43.781986   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:09:43.873419   29206 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:09:43.877524   29206 command_runner.go:130] > NAME=Buildroot
	I0914 22:09:43.877543   29206 command_runner.go:130] > VERSION=2021.02.12-1-g52d8811-dirty
	I0914 22:09:43.877550   29206 command_runner.go:130] > ID=buildroot
	I0914 22:09:43.877558   29206 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 22:09:43.877566   29206 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 22:09:43.877607   29206 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:09:43.877620   29206 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:09:43.877678   29206 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:09:43.877746   29206 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:09:43.877755   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /etc/ssl/certs/134852.pem
	I0914 22:09:43.877830   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:09:43.886396   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:09:43.906422   29206 start.go:303] post-start completed in 129.041617ms
	I0914 22:09:43.906444   29206 fix.go:56] fixHost completed within 19.61520826s
	I0914 22:09:43.906469   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:43.909185   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.909562   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:43.909581   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:43.909805   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:09:43.910018   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:43.910180   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:43.910360   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:09:43.910503   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:09:43.910827   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0914 22:09:43.910842   29206 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:09:44.039637   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694729383.991908407
	
	I0914 22:09:44.039655   29206 fix.go:206] guest clock: 1694729383.991908407
	I0914 22:09:44.039662   29206 fix.go:219] Guest: 2023-09-14 22:09:43.991908407 +0000 UTC Remote: 2023-09-14 22:09:43.906448645 +0000 UTC m=+317.641265935 (delta=85.459762ms)
	I0914 22:09:44.039677   29206 fix.go:190] guest clock delta is within tolerance: 85.459762ms
	I0914 22:09:44.039681   29206 start.go:83] releasing machines lock for "multinode-124911", held for 19.748470239s
	I0914 22:09:44.039700   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:09:44.039967   29206 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 22:09:44.042978   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:44.043447   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:44.043490   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:44.043669   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:09:44.044276   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:09:44.044465   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:09:44.044539   29206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:09:44.044576   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:44.044812   29206 ssh_runner.go:195] Run: cat /version.json
	I0914 22:09:44.044841   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:09:44.047402   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:44.047661   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:44.047764   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:44.047798   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:44.048054   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:44.048073   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:09:44.048099   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:44.048259   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:09:44.048293   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:44.048399   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:09:44.048476   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:09:44.048584   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:09:44.048641   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:09:44.048686   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:09:44.135742   29206 command_runner.go:130] > {"iso_version": "v1.31.0-1694625400-17243", "kicbase_version": "v0.0.40-1694457807-17194", "minikube_version": "v1.31.2", "commit": "b8afb9b4a853f4e7882dbdfb53995784a48fcea7"}
	I0914 22:09:44.136092   29206 ssh_runner.go:195] Run: systemctl --version
	I0914 22:09:44.167477   29206 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 22:09:44.168070   29206 command_runner.go:130] > systemd 247 (247)
	I0914 22:09:44.168101   29206 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0914 22:09:44.168153   29206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:09:44.304246   29206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:09:44.310303   29206 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 22:09:44.310692   29206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:09:44.310746   29206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:09:44.323853   29206 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 22:09:44.323895   29206 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:09:44.323903   29206 start.go:469] detecting cgroup driver to use...
	I0914 22:09:44.323959   29206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:09:44.337454   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:09:44.349408   29206 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:09:44.349453   29206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:09:44.361953   29206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:09:44.373928   29206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:09:44.386551   29206 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0914 22:09:44.484114   29206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:09:44.605595   29206 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0914 22:09:44.605692   29206 docker.go:212] disabling docker service ...
	I0914 22:09:44.605750   29206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:09:44.618680   29206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:09:44.630705   29206 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0914 22:09:44.630842   29206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:09:44.751875   29206 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0914 22:09:44.751952   29206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:09:44.868574   29206 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0914 22:09:44.868608   29206 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0914 22:09:44.868676   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:09:44.880661   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:09:44.896480   29206 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 22:09:44.896513   29206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:09:44.896553   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:09:44.905051   29206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:09:44.905119   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:09:44.913625   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:09:44.923752   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:09:44.932413   29206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:09:44.941194   29206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:09:44.948669   29206 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:09:44.948702   29206 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:09:44.948740   29206 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:09:44.960662   29206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:09:44.968273   29206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:09:45.065053   29206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:09:45.218846   29206 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:09:45.218927   29206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:09:45.223090   29206 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 22:09:45.223116   29206 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 22:09:45.223126   29206 command_runner.go:130] > Device: 16h/22d	Inode: 734         Links: 1
	I0914 22:09:45.223137   29206 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:09:45.223145   29206 command_runner.go:130] > Access: 2023-09-14 22:09:45.157726050 +0000
	I0914 22:09:45.223156   29206 command_runner.go:130] > Modify: 2023-09-14 22:09:45.157726050 +0000
	I0914 22:09:45.223168   29206 command_runner.go:130] > Change: 2023-09-14 22:09:45.157726050 +0000
	I0914 22:09:45.223174   29206 command_runner.go:130] >  Birth: -
	I0914 22:09:45.223299   29206 start.go:537] Will wait 60s for crictl version
	I0914 22:09:45.223358   29206 ssh_runner.go:195] Run: which crictl
	I0914 22:09:45.226827   29206 command_runner.go:130] > /usr/bin/crictl
	I0914 22:09:45.226883   29206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:09:45.251741   29206 command_runner.go:130] > Version:  0.1.0
	I0914 22:09:45.251767   29206 command_runner.go:130] > RuntimeName:  cri-o
	I0914 22:09:45.251774   29206 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0914 22:09:45.251782   29206 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0914 22:09:45.253320   29206 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:09:45.253427   29206 ssh_runner.go:195] Run: crio --version
	I0914 22:09:45.296273   29206 command_runner.go:130] > crio version 1.24.1
	I0914 22:09:45.296305   29206 command_runner.go:130] > Version:          1.24.1
	I0914 22:09:45.296317   29206 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 22:09:45.296324   29206 command_runner.go:130] > GitTreeState:     dirty
	I0914 22:09:45.296340   29206 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 22:09:45.296348   29206 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 22:09:45.296363   29206 command_runner.go:130] > Compiler:         gc
	I0914 22:09:45.296384   29206 command_runner.go:130] > Platform:         linux/amd64
	I0914 22:09:45.296392   29206 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:09:45.296414   29206 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:09:45.296426   29206 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:09:45.296432   29206 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:09:45.296569   29206 ssh_runner.go:195] Run: crio --version
	I0914 22:09:45.345527   29206 command_runner.go:130] > crio version 1.24.1
	I0914 22:09:45.345554   29206 command_runner.go:130] > Version:          1.24.1
	I0914 22:09:45.345570   29206 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 22:09:45.345577   29206 command_runner.go:130] > GitTreeState:     dirty
	I0914 22:09:45.345593   29206 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 22:09:45.345601   29206 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 22:09:45.345609   29206 command_runner.go:130] > Compiler:         gc
	I0914 22:09:45.345616   29206 command_runner.go:130] > Platform:         linux/amd64
	I0914 22:09:45.345628   29206 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:09:45.345645   29206 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:09:45.345652   29206 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:09:45.345665   29206 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:09:45.347576   29206 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:09:45.349026   29206 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 22:09:45.351273   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:45.351615   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:09:45.351644   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:09:45.351822   29206 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:09:45.355429   29206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:09:45.368382   29206 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:09:45.368449   29206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:09:45.401717   29206 command_runner.go:130] > {
	I0914 22:09:45.401735   29206 command_runner.go:130] >   "images": [
	I0914 22:09:45.401741   29206 command_runner.go:130] >     {
	I0914 22:09:45.401752   29206 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0914 22:09:45.401769   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:45.401777   29206 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0914 22:09:45.401782   29206 command_runner.go:130] >       ],
	I0914 22:09:45.401789   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:45.401802   29206 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0914 22:09:45.401817   29206 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0914 22:09:45.401824   29206 command_runner.go:130] >       ],
	I0914 22:09:45.401834   29206 command_runner.go:130] >       "size": "65249302",
	I0914 22:09:45.401848   29206 command_runner.go:130] >       "uid": null,
	I0914 22:09:45.401855   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:45.401864   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:45.401874   29206 command_runner.go:130] >     },
	I0914 22:09:45.401881   29206 command_runner.go:130] >     {
	I0914 22:09:45.401897   29206 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0914 22:09:45.401908   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:45.401919   29206 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0914 22:09:45.401929   29206 command_runner.go:130] >       ],
	I0914 22:09:45.401941   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:45.401959   29206 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0914 22:09:45.401975   29206 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0914 22:09:45.401985   29206 command_runner.go:130] >       ],
	I0914 22:09:45.401993   29206 command_runner.go:130] >       "size": "750414",
	I0914 22:09:45.402004   29206 command_runner.go:130] >       "uid": {
	I0914 22:09:45.402015   29206 command_runner.go:130] >         "value": "65535"
	I0914 22:09:45.402024   29206 command_runner.go:130] >       },
	I0914 22:09:45.402033   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:45.402045   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:45.402055   29206 command_runner.go:130] >     }
	I0914 22:09:45.402063   29206 command_runner.go:130] >   ]
	I0914 22:09:45.402070   29206 command_runner.go:130] > }
	I0914 22:09:45.402845   29206 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:09:45.402900   29206 ssh_runner.go:195] Run: which lz4
	I0914 22:09:45.406335   29206 command_runner.go:130] > /usr/bin/lz4
	I0914 22:09:45.406359   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0914 22:09:45.406428   29206 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:09:45.410019   29206 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:09:45.410172   29206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:09:45.410193   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:09:46.958864   29206 crio.go:444] Took 1.552456 seconds to copy over tarball
	I0914 22:09:46.958934   29206 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:09:49.631725   29206 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.672768569s)
	I0914 22:09:49.631748   29206 crio.go:451] Took 2.672862 seconds to extract the tarball
	I0914 22:09:49.631756   29206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:09:49.670618   29206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:09:49.706970   29206 command_runner.go:130] > {
	I0914 22:09:49.706994   29206 command_runner.go:130] >   "images": [
	I0914 22:09:49.706999   29206 command_runner.go:130] >     {
	I0914 22:09:49.707018   29206 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0914 22:09:49.707026   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.707035   29206 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0914 22:09:49.707043   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707054   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.707077   29206 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0914 22:09:49.707090   29206 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0914 22:09:49.707108   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707119   29206 command_runner.go:130] >       "size": "65249302",
	I0914 22:09:49.707129   29206 command_runner.go:130] >       "uid": null,
	I0914 22:09:49.707139   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.707151   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.707161   29206 command_runner.go:130] >     },
	I0914 22:09:49.707167   29206 command_runner.go:130] >     {
	I0914 22:09:49.707181   29206 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 22:09:49.707191   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.707204   29206 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 22:09:49.707214   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707224   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.707240   29206 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 22:09:49.707257   29206 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 22:09:49.707266   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707276   29206 command_runner.go:130] >       "size": "31470524",
	I0914 22:09:49.707283   29206 command_runner.go:130] >       "uid": null,
	I0914 22:09:49.707298   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.707308   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.707318   29206 command_runner.go:130] >     },
	I0914 22:09:49.707327   29206 command_runner.go:130] >     {
	I0914 22:09:49.707340   29206 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0914 22:09:49.707351   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.707363   29206 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0914 22:09:49.707373   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707383   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.707397   29206 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0914 22:09:49.707411   29206 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0914 22:09:49.707419   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707435   29206 command_runner.go:130] >       "size": "53621675",
	I0914 22:09:49.707442   29206 command_runner.go:130] >       "uid": null,
	I0914 22:09:49.707452   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.707458   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.707483   29206 command_runner.go:130] >     },
	I0914 22:09:49.707492   29206 command_runner.go:130] >     {
	I0914 22:09:49.707502   29206 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0914 22:09:49.707511   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.707519   29206 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0914 22:09:49.707527   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707534   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.707548   29206 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0914 22:09:49.707561   29206 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0914 22:09:49.707569   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707579   29206 command_runner.go:130] >       "size": "295456551",
	I0914 22:09:49.707585   29206 command_runner.go:130] >       "uid": {
	I0914 22:09:49.707595   29206 command_runner.go:130] >         "value": "0"
	I0914 22:09:49.707612   29206 command_runner.go:130] >       },
	I0914 22:09:49.707621   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.707630   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.707639   29206 command_runner.go:130] >     },
	I0914 22:09:49.707648   29206 command_runner.go:130] >     {
	I0914 22:09:49.707665   29206 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0914 22:09:49.707675   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.707684   29206 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0914 22:09:49.707693   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707703   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.707718   29206 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0914 22:09:49.707734   29206 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0914 22:09:49.707744   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707754   29206 command_runner.go:130] >       "size": "126972880",
	I0914 22:09:49.707763   29206 command_runner.go:130] >       "uid": {
	I0914 22:09:49.707769   29206 command_runner.go:130] >         "value": "0"
	I0914 22:09:49.707777   29206 command_runner.go:130] >       },
	I0914 22:09:49.707787   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.707795   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.707804   29206 command_runner.go:130] >     },
	I0914 22:09:49.707813   29206 command_runner.go:130] >     {
	I0914 22:09:49.707825   29206 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0914 22:09:49.707835   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.707852   29206 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0914 22:09:49.707861   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707868   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.707884   29206 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0914 22:09:49.707901   29206 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0914 22:09:49.707910   29206 command_runner.go:130] >       ],
	I0914 22:09:49.707921   29206 command_runner.go:130] >       "size": "123163446",
	I0914 22:09:49.707930   29206 command_runner.go:130] >       "uid": {
	I0914 22:09:49.707940   29206 command_runner.go:130] >         "value": "0"
	I0914 22:09:49.707949   29206 command_runner.go:130] >       },
	I0914 22:09:49.707956   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.707966   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.707976   29206 command_runner.go:130] >     },
	I0914 22:09:49.707985   29206 command_runner.go:130] >     {
	I0914 22:09:49.707998   29206 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0914 22:09:49.708007   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.708017   29206 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0914 22:09:49.708025   29206 command_runner.go:130] >       ],
	I0914 22:09:49.708039   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.708054   29206 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0914 22:09:49.708069   29206 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0914 22:09:49.708078   29206 command_runner.go:130] >       ],
	I0914 22:09:49.708089   29206 command_runner.go:130] >       "size": "74680215",
	I0914 22:09:49.708095   29206 command_runner.go:130] >       "uid": null,
	I0914 22:09:49.708104   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.708110   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.708119   29206 command_runner.go:130] >     },
	I0914 22:09:49.708125   29206 command_runner.go:130] >     {
	I0914 22:09:49.708137   29206 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0914 22:09:49.708147   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.708158   29206 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0914 22:09:49.708168   29206 command_runner.go:130] >       ],
	I0914 22:09:49.708177   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.708191   29206 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0914 22:09:49.708304   29206 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0914 22:09:49.708319   29206 command_runner.go:130] >       ],
	I0914 22:09:49.708333   29206 command_runner.go:130] >       "size": "61477686",
	I0914 22:09:49.708340   29206 command_runner.go:130] >       "uid": {
	I0914 22:09:49.708350   29206 command_runner.go:130] >         "value": "0"
	I0914 22:09:49.708356   29206 command_runner.go:130] >       },
	I0914 22:09:49.708366   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.708377   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.708386   29206 command_runner.go:130] >     },
	I0914 22:09:49.708392   29206 command_runner.go:130] >     {
	I0914 22:09:49.708404   29206 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0914 22:09:49.708414   29206 command_runner.go:130] >       "repoTags": [
	I0914 22:09:49.708433   29206 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0914 22:09:49.708442   29206 command_runner.go:130] >       ],
	I0914 22:09:49.708449   29206 command_runner.go:130] >       "repoDigests": [
	I0914 22:09:49.708464   29206 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0914 22:09:49.708479   29206 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0914 22:09:49.708491   29206 command_runner.go:130] >       ],
	I0914 22:09:49.708502   29206 command_runner.go:130] >       "size": "750414",
	I0914 22:09:49.708512   29206 command_runner.go:130] >       "uid": {
	I0914 22:09:49.708521   29206 command_runner.go:130] >         "value": "65535"
	I0914 22:09:49.708530   29206 command_runner.go:130] >       },
	I0914 22:09:49.708537   29206 command_runner.go:130] >       "username": "",
	I0914 22:09:49.708547   29206 command_runner.go:130] >       "spec": null
	I0914 22:09:49.708552   29206 command_runner.go:130] >     }
	I0914 22:09:49.708560   29206 command_runner.go:130] >   ]
	I0914 22:09:49.708569   29206 command_runner.go:130] > }
	I0914 22:09:49.708721   29206 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:09:49.708735   29206 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:09:49.708791   29206 ssh_runner.go:195] Run: crio config
	I0914 22:09:49.761295   29206 command_runner.go:130] ! time="2023-09-14 22:09:49.713204213Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0914 22:09:49.761330   29206 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0914 22:09:49.766554   29206 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 22:09:49.766582   29206 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 22:09:49.766589   29206 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 22:09:49.766593   29206 command_runner.go:130] > #
	I0914 22:09:49.766599   29206 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 22:09:49.766605   29206 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 22:09:49.766611   29206 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 22:09:49.766619   29206 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 22:09:49.766624   29206 command_runner.go:130] > # reload'.
	I0914 22:09:49.766629   29206 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 22:09:49.766636   29206 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 22:09:49.766642   29206 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 22:09:49.766657   29206 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 22:09:49.766660   29206 command_runner.go:130] > [crio]
	I0914 22:09:49.766666   29206 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 22:09:49.766672   29206 command_runner.go:130] > # containers images, in this directory.
	I0914 22:09:49.766684   29206 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 22:09:49.766701   29206 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 22:09:49.766713   29206 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 22:09:49.766726   29206 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 22:09:49.766739   29206 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 22:09:49.766747   29206 command_runner.go:130] > storage_driver = "overlay"
	I0914 22:09:49.766756   29206 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 22:09:49.766768   29206 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 22:09:49.766776   29206 command_runner.go:130] > storage_option = [
	I0914 22:09:49.766788   29206 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 22:09:49.766795   29206 command_runner.go:130] > ]
	I0914 22:09:49.766808   29206 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 22:09:49.766821   29206 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 22:09:49.766836   29206 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 22:09:49.766845   29206 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 22:09:49.766851   29206 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 22:09:49.766858   29206 command_runner.go:130] > # always happen on a node reboot
	I0914 22:09:49.766863   29206 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 22:09:49.766871   29206 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 22:09:49.766877   29206 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 22:09:49.766891   29206 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 22:09:49.766902   29206 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0914 22:09:49.766912   29206 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 22:09:49.766920   29206 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 22:09:49.766927   29206 command_runner.go:130] > # internal_wipe = true
	I0914 22:09:49.766932   29206 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 22:09:49.766941   29206 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 22:09:49.766947   29206 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 22:09:49.766953   29206 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 22:09:49.766959   29206 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 22:09:49.766965   29206 command_runner.go:130] > [crio.api]
	I0914 22:09:49.766970   29206 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 22:09:49.766976   29206 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 22:09:49.766981   29206 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 22:09:49.766988   29206 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 22:09:49.766994   29206 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 22:09:49.767001   29206 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 22:09:49.767005   29206 command_runner.go:130] > # stream_port = "0"
	I0914 22:09:49.767010   29206 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 22:09:49.767021   29206 command_runner.go:130] > # stream_enable_tls = false
	I0914 22:09:49.767030   29206 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 22:09:49.767034   29206 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 22:09:49.767042   29206 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 22:09:49.767049   29206 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 22:09:49.767055   29206 command_runner.go:130] > # minutes.
	I0914 22:09:49.767059   29206 command_runner.go:130] > # stream_tls_cert = ""
	I0914 22:09:49.767067   29206 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 22:09:49.767074   29206 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 22:09:49.767082   29206 command_runner.go:130] > # stream_tls_key = ""
	I0914 22:09:49.767091   29206 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 22:09:49.767097   29206 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 22:09:49.767105   29206 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 22:09:49.767109   29206 command_runner.go:130] > # stream_tls_ca = ""
	I0914 22:09:49.767116   29206 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:09:49.767123   29206 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 22:09:49.767130   29206 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:09:49.767137   29206 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0914 22:09:49.767159   29206 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 22:09:49.767168   29206 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 22:09:49.767172   29206 command_runner.go:130] > [crio.runtime]
	I0914 22:09:49.767177   29206 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 22:09:49.767183   29206 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 22:09:49.767189   29206 command_runner.go:130] > # "nofile=1024:2048"
	I0914 22:09:49.767195   29206 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 22:09:49.767201   29206 command_runner.go:130] > # default_ulimits = [
	I0914 22:09:49.767205   29206 command_runner.go:130] > # ]
	I0914 22:09:49.767211   29206 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 22:09:49.767217   29206 command_runner.go:130] > # no_pivot = false
	I0914 22:09:49.767223   29206 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 22:09:49.767230   29206 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 22:09:49.767237   29206 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 22:09:49.767242   29206 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 22:09:49.767248   29206 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 22:09:49.767254   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:09:49.767259   29206 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 22:09:49.767266   29206 command_runner.go:130] > # Cgroup setting for conmon
	I0914 22:09:49.767275   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 22:09:49.767282   29206 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 22:09:49.767288   29206 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 22:09:49.767295   29206 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 22:09:49.767302   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:09:49.767308   29206 command_runner.go:130] > conmon_env = [
	I0914 22:09:49.767314   29206 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 22:09:49.767319   29206 command_runner.go:130] > ]
	I0914 22:09:49.767325   29206 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 22:09:49.767332   29206 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 22:09:49.767338   29206 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 22:09:49.767344   29206 command_runner.go:130] > # default_env = [
	I0914 22:09:49.767347   29206 command_runner.go:130] > # ]
	I0914 22:09:49.767353   29206 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 22:09:49.767359   29206 command_runner.go:130] > # selinux = false
	I0914 22:09:49.767365   29206 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 22:09:49.767373   29206 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 22:09:49.767381   29206 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 22:09:49.767388   29206 command_runner.go:130] > # seccomp_profile = ""
	I0914 22:09:49.767393   29206 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 22:09:49.767399   29206 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 22:09:49.767407   29206 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 22:09:49.767412   29206 command_runner.go:130] > # which might increase security.
	I0914 22:09:49.767419   29206 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 22:09:49.767428   29206 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 22:09:49.767434   29206 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 22:09:49.767440   29206 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 22:09:49.767448   29206 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 22:09:49.767453   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:09:49.767460   29206 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 22:09:49.767483   29206 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 22:09:49.767495   29206 command_runner.go:130] > # the cgroup blockio controller.
	I0914 22:09:49.767502   29206 command_runner.go:130] > # blockio_config_file = ""
	I0914 22:09:49.767508   29206 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 22:09:49.767515   29206 command_runner.go:130] > # irqbalance daemon.
	I0914 22:09:49.767522   29206 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 22:09:49.767531   29206 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 22:09:49.767536   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:09:49.767543   29206 command_runner.go:130] > # rdt_config_file = ""
	I0914 22:09:49.767558   29206 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 22:09:49.767565   29206 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 22:09:49.767571   29206 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 22:09:49.767577   29206 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 22:09:49.767583   29206 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 22:09:49.767592   29206 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 22:09:49.767596   29206 command_runner.go:130] > # will be added.
	I0914 22:09:49.767602   29206 command_runner.go:130] > # default_capabilities = [
	I0914 22:09:49.767606   29206 command_runner.go:130] > # 	"CHOWN",
	I0914 22:09:49.767612   29206 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 22:09:49.767617   29206 command_runner.go:130] > # 	"FSETID",
	I0914 22:09:49.767623   29206 command_runner.go:130] > # 	"FOWNER",
	I0914 22:09:49.767627   29206 command_runner.go:130] > # 	"SETGID",
	I0914 22:09:49.767633   29206 command_runner.go:130] > # 	"SETUID",
	I0914 22:09:49.767639   29206 command_runner.go:130] > # 	"SETPCAP",
	I0914 22:09:49.767646   29206 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 22:09:49.767650   29206 command_runner.go:130] > # 	"KILL",
	I0914 22:09:49.767656   29206 command_runner.go:130] > # ]
	I0914 22:09:49.767662   29206 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 22:09:49.767670   29206 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:09:49.767675   29206 command_runner.go:130] > # default_sysctls = [
	I0914 22:09:49.767678   29206 command_runner.go:130] > # ]
	I0914 22:09:49.767685   29206 command_runner.go:130] > # List of devices on the host that a
	I0914 22:09:49.767692   29206 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 22:09:49.767699   29206 command_runner.go:130] > # allowed_devices = [
	I0914 22:09:49.767703   29206 command_runner.go:130] > # 	"/dev/fuse",
	I0914 22:09:49.767707   29206 command_runner.go:130] > # ]
	I0914 22:09:49.767712   29206 command_runner.go:130] > # List of additional devices, specified as
	I0914 22:09:49.767721   29206 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 22:09:49.767732   29206 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 22:09:49.767775   29206 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:09:49.767783   29206 command_runner.go:130] > # additional_devices = [
	I0914 22:09:49.767788   29206 command_runner.go:130] > # ]
	I0914 22:09:49.767793   29206 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 22:09:49.767797   29206 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 22:09:49.767801   29206 command_runner.go:130] > # 	"/etc/cdi",
	I0914 22:09:49.767805   29206 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 22:09:49.767811   29206 command_runner.go:130] > # ]
	I0914 22:09:49.767817   29206 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 22:09:49.767826   29206 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 22:09:49.767836   29206 command_runner.go:130] > # Defaults to false.
	I0914 22:09:49.767844   29206 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 22:09:49.767853   29206 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 22:09:49.767861   29206 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 22:09:49.767866   29206 command_runner.go:130] > # hooks_dir = [
	I0914 22:09:49.767870   29206 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 22:09:49.767876   29206 command_runner.go:130] > # ]
	I0914 22:09:49.767883   29206 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 22:09:49.767891   29206 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 22:09:49.767898   29206 command_runner.go:130] > # its default mounts from the following two files:
	I0914 22:09:49.767907   29206 command_runner.go:130] > #
	I0914 22:09:49.767916   29206 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 22:09:49.767925   29206 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 22:09:49.767933   29206 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 22:09:49.767937   29206 command_runner.go:130] > #
	I0914 22:09:49.767943   29206 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 22:09:49.767951   29206 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 22:09:49.767960   29206 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 22:09:49.767967   29206 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 22:09:49.767972   29206 command_runner.go:130] > #
	I0914 22:09:49.767980   29206 command_runner.go:130] > # default_mounts_file = ""
	I0914 22:09:49.767987   29206 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 22:09:49.767996   29206 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 22:09:49.768002   29206 command_runner.go:130] > pids_limit = 1024
	I0914 22:09:49.768009   29206 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0914 22:09:49.768017   29206 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 22:09:49.768025   29206 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 22:09:49.768035   29206 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 22:09:49.768043   29206 command_runner.go:130] > # log_size_max = -1
	I0914 22:09:49.768055   29206 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0914 22:09:49.768060   29206 command_runner.go:130] > # log_to_journald = false
	I0914 22:09:49.768068   29206 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 22:09:49.768074   29206 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 22:09:49.768082   29206 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 22:09:49.768087   29206 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 22:09:49.768104   29206 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 22:09:49.768112   29206 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 22:09:49.768130   29206 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 22:09:49.768137   29206 command_runner.go:130] > # read_only = false
	I0914 22:09:49.768143   29206 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 22:09:49.768152   29206 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 22:09:49.768156   29206 command_runner.go:130] > # live configuration reload.
	I0914 22:09:49.768163   29206 command_runner.go:130] > # log_level = "info"
	I0914 22:09:49.768169   29206 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 22:09:49.768176   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:09:49.768183   29206 command_runner.go:130] > # log_filter = ""
	I0914 22:09:49.768192   29206 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 22:09:49.768200   29206 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 22:09:49.768207   29206 command_runner.go:130] > # separated by comma.
	I0914 22:09:49.768214   29206 command_runner.go:130] > # uid_mappings = ""
	I0914 22:09:49.768222   29206 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 22:09:49.768231   29206 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 22:09:49.768236   29206 command_runner.go:130] > # separated by comma.
	I0914 22:09:49.768240   29206 command_runner.go:130] > # gid_mappings = ""
	I0914 22:09:49.768248   29206 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 22:09:49.768254   29206 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:09:49.768262   29206 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:09:49.768269   29206 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 22:09:49.768275   29206 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 22:09:49.768283   29206 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:09:49.768289   29206 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:09:49.768296   29206 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 22:09:49.768302   29206 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 22:09:49.768311   29206 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 22:09:49.768321   29206 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 22:09:49.768325   29206 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 22:09:49.768332   29206 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 22:09:49.768340   29206 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 22:09:49.768345   29206 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 22:09:49.768352   29206 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 22:09:49.768361   29206 command_runner.go:130] > drop_infra_ctr = false
	I0914 22:09:49.768369   29206 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 22:09:49.768377   29206 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 22:09:49.768384   29206 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 22:09:49.768391   29206 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 22:09:49.768396   29206 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 22:09:49.768403   29206 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 22:09:49.768408   29206 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 22:09:49.768415   29206 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 22:09:49.768422   29206 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 22:09:49.768428   29206 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 22:09:49.768438   29206 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0914 22:09:49.768447   29206 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0914 22:09:49.768453   29206 command_runner.go:130] > # default_runtime = "runc"
	I0914 22:09:49.768459   29206 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 22:09:49.768468   29206 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 22:09:49.768479   29206 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 22:09:49.768487   29206 command_runner.go:130] > # creation as a file is not desired either.
	I0914 22:09:49.768495   29206 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 22:09:49.768502   29206 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 22:09:49.768506   29206 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 22:09:49.768510   29206 command_runner.go:130] > # ]
	I0914 22:09:49.768517   29206 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 22:09:49.768525   29206 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 22:09:49.768532   29206 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0914 22:09:49.768540   29206 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0914 22:09:49.768543   29206 command_runner.go:130] > #
	I0914 22:09:49.768549   29206 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0914 22:09:49.768557   29206 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0914 22:09:49.768561   29206 command_runner.go:130] > #  runtime_type = "oci"
	I0914 22:09:49.768571   29206 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0914 22:09:49.768578   29206 command_runner.go:130] > #  privileged_without_host_devices = false
	I0914 22:09:49.768582   29206 command_runner.go:130] > #  allowed_annotations = []
	I0914 22:09:49.768588   29206 command_runner.go:130] > # Where:
	I0914 22:09:49.768594   29206 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0914 22:09:49.768602   29206 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0914 22:09:49.768610   29206 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 22:09:49.768618   29206 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 22:09:49.768623   29206 command_runner.go:130] > #   in $PATH.
	I0914 22:09:49.768629   29206 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0914 22:09:49.768636   29206 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 22:09:49.768642   29206 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0914 22:09:49.768648   29206 command_runner.go:130] > #   state.
	I0914 22:09:49.768654   29206 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 22:09:49.768663   29206 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0914 22:09:49.768671   29206 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 22:09:49.768679   29206 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 22:09:49.768689   29206 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 22:09:49.768700   29206 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 22:09:49.768707   29206 command_runner.go:130] > #   The currently recognized values are:
	I0914 22:09:49.768714   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 22:09:49.768723   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 22:09:49.768731   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 22:09:49.768740   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 22:09:49.768747   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 22:09:49.768756   29206 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 22:09:49.768764   29206 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 22:09:49.768772   29206 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0914 22:09:49.768779   29206 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 22:09:49.768785   29206 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 22:09:49.768790   29206 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 22:09:49.768796   29206 command_runner.go:130] > runtime_type = "oci"
	I0914 22:09:49.768801   29206 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 22:09:49.768807   29206 command_runner.go:130] > runtime_config_path = ""
	I0914 22:09:49.768811   29206 command_runner.go:130] > monitor_path = ""
	I0914 22:09:49.768818   29206 command_runner.go:130] > monitor_cgroup = ""
	I0914 22:09:49.768824   29206 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 22:09:49.768837   29206 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0914 22:09:49.768843   29206 command_runner.go:130] > # running containers
	I0914 22:09:49.768848   29206 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0914 22:09:49.768856   29206 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0914 22:09:49.768901   29206 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0914 22:09:49.768909   29206 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0914 22:09:49.768917   29206 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0914 22:09:49.768922   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0914 22:09:49.768929   29206 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0914 22:09:49.768933   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0914 22:09:49.768942   29206 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0914 22:09:49.768949   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0914 22:09:49.768955   29206 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 22:09:49.768963   29206 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 22:09:49.768970   29206 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 22:09:49.768980   29206 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 22:09:49.768990   29206 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 22:09:49.768998   29206 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 22:09:49.769009   29206 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 22:09:49.769019   29206 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 22:09:49.769027   29206 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 22:09:49.769036   29206 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 22:09:49.769043   29206 command_runner.go:130] > # Example:
	I0914 22:09:49.769048   29206 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 22:09:49.769055   29206 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 22:09:49.769061   29206 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 22:09:49.769067   29206 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 22:09:49.769073   29206 command_runner.go:130] > # cpuset = 0
	I0914 22:09:49.769077   29206 command_runner.go:130] > # cpushares = "0-1"
	I0914 22:09:49.769080   29206 command_runner.go:130] > # Where:
	I0914 22:09:49.769088   29206 command_runner.go:130] > # The workload name is workload-type.
	I0914 22:09:49.769097   29206 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 22:09:49.769109   29206 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 22:09:49.769121   29206 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 22:09:49.769136   29206 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 22:09:49.769148   29206 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 22:09:49.769153   29206 command_runner.go:130] > # 
	I0914 22:09:49.769162   29206 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 22:09:49.769168   29206 command_runner.go:130] > #
	I0914 22:09:49.769174   29206 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 22:09:49.769182   29206 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 22:09:49.769190   29206 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 22:09:49.769197   29206 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 22:09:49.769205   29206 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 22:09:49.769211   29206 command_runner.go:130] > [crio.image]
	I0914 22:09:49.769217   29206 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 22:09:49.769224   29206 command_runner.go:130] > # default_transport = "docker://"
	I0914 22:09:49.769231   29206 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 22:09:49.769240   29206 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:09:49.769244   29206 command_runner.go:130] > # global_auth_file = ""
	I0914 22:09:49.769250   29206 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 22:09:49.769255   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:09:49.769262   29206 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0914 22:09:49.769271   29206 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 22:09:49.769280   29206 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:09:49.769285   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:09:49.769292   29206 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 22:09:49.769298   29206 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 22:09:49.769306   29206 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 22:09:49.769314   29206 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 22:09:49.769322   29206 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 22:09:49.769326   29206 command_runner.go:130] > # pause_command = "/pause"
	I0914 22:09:49.769332   29206 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 22:09:49.769338   29206 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 22:09:49.769344   29206 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 22:09:49.769350   29206 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 22:09:49.769355   29206 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 22:09:49.769359   29206 command_runner.go:130] > # signature_policy = ""
	I0914 22:09:49.769364   29206 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 22:09:49.769370   29206 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 22:09:49.769374   29206 command_runner.go:130] > # changing them here.
	I0914 22:09:49.769382   29206 command_runner.go:130] > # insecure_registries = [
	I0914 22:09:49.769385   29206 command_runner.go:130] > # ]
	I0914 22:09:49.769393   29206 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 22:09:49.769398   29206 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 22:09:49.769402   29206 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 22:09:49.769407   29206 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 22:09:49.769411   29206 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 22:09:49.769417   29206 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0914 22:09:49.769420   29206 command_runner.go:130] > # CNI plugins.
	I0914 22:09:49.769424   29206 command_runner.go:130] > [crio.network]
	I0914 22:09:49.769429   29206 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 22:09:49.769434   29206 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 22:09:49.769438   29206 command_runner.go:130] > # cni_default_network = ""
	I0914 22:09:49.769444   29206 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 22:09:49.769448   29206 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 22:09:49.769453   29206 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 22:09:49.769457   29206 command_runner.go:130] > # plugin_dirs = [
	I0914 22:09:49.769461   29206 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 22:09:49.769466   29206 command_runner.go:130] > # ]
	I0914 22:09:49.769472   29206 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 22:09:49.769476   29206 command_runner.go:130] > [crio.metrics]
	I0914 22:09:49.769481   29206 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 22:09:49.769485   29206 command_runner.go:130] > enable_metrics = true
	I0914 22:09:49.769490   29206 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 22:09:49.769495   29206 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 22:09:49.769501   29206 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0914 22:09:49.769507   29206 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 22:09:49.769512   29206 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 22:09:49.769517   29206 command_runner.go:130] > # metrics_collectors = [
	I0914 22:09:49.769521   29206 command_runner.go:130] > # 	"operations",
	I0914 22:09:49.769526   29206 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 22:09:49.769533   29206 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 22:09:49.769537   29206 command_runner.go:130] > # 	"operations_errors",
	I0914 22:09:49.769544   29206 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 22:09:49.769548   29206 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 22:09:49.769552   29206 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 22:09:49.769561   29206 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 22:09:49.769568   29206 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 22:09:49.769574   29206 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 22:09:49.769579   29206 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 22:09:49.769585   29206 command_runner.go:130] > # 	"containers_oom_total",
	I0914 22:09:49.769592   29206 command_runner.go:130] > # 	"containers_oom",
	I0914 22:09:49.769598   29206 command_runner.go:130] > # 	"processes_defunct",
	I0914 22:09:49.769603   29206 command_runner.go:130] > # 	"operations_total",
	I0914 22:09:49.769609   29206 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 22:09:49.769614   29206 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 22:09:49.769620   29206 command_runner.go:130] > # 	"operations_errors_total",
	I0914 22:09:49.769624   29206 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 22:09:49.769632   29206 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 22:09:49.769636   29206 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 22:09:49.769643   29206 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 22:09:49.769647   29206 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 22:09:49.769654   29206 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 22:09:49.769658   29206 command_runner.go:130] > # ]
	I0914 22:09:49.769668   29206 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 22:09:49.769675   29206 command_runner.go:130] > # metrics_port = 9090
	I0914 22:09:49.769680   29206 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 22:09:49.769688   29206 command_runner.go:130] > # metrics_socket = ""
	I0914 22:09:49.769694   29206 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 22:09:49.769702   29206 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 22:09:49.769710   29206 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 22:09:49.769717   29206 command_runner.go:130] > # certificate on any modification event.
	I0914 22:09:49.769724   29206 command_runner.go:130] > # metrics_cert = ""
	I0914 22:09:49.769729   29206 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 22:09:49.769738   29206 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 22:09:49.769742   29206 command_runner.go:130] > # metrics_key = ""
	I0914 22:09:49.769750   29206 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 22:09:49.769758   29206 command_runner.go:130] > [crio.tracing]
	I0914 22:09:49.769765   29206 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 22:09:49.769770   29206 command_runner.go:130] > # enable_tracing = false
	I0914 22:09:49.769776   29206 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 22:09:49.769783   29206 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 22:09:49.769790   29206 command_runner.go:130] > # Number of samples to collect per million spans.
	I0914 22:09:49.769797   29206 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 22:09:49.769803   29206 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 22:09:49.769809   29206 command_runner.go:130] > [crio.stats]
	I0914 22:09:49.769814   29206 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 22:09:49.769823   29206 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 22:09:49.769827   29206 command_runner.go:130] > # stats_collection_period = 0
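
The block above is the rendered CRI-O configuration the provisioner echoes back before building the kubeadm config; the uncommented keys are the ones minikube sets for this profile (storage under /var/lib/containers/storage, cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image = registry.k8s.io/pause:3.9, and the runc handler). A minimal sketch of reading two of those values back out of the file, assuming the github.com/BurntSushi/toml package and the usual /etc/crio/crio.conf path; this is illustrative only, not code from the test.

	// readcrio.go: hypothetical helper that decodes the CRI-O config dumped
	// above and reports the two settings minikube overrides most often.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type crioConf struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConf
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager) // "cgroupfs" in this run
		fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)         // "registry.k8s.io/pause:3.9"
	}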
	I0914 22:09:49.769912   29206 cni.go:84] Creating CNI manager for ""
	I0914 22:09:49.769923   29206 cni.go:136] 3 nodes found, recommending kindnet
	I0914 22:09:49.769938   29206 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:09:49.769957   29206 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124911 NodeName:multinode-124911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:09:49.770063   29206 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-124911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:09:49.770127   29206 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-124911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
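
The ExecStart line above is what ends up in the kubelet systemd drop-in (the 376-byte 10-kubeadm.conf copied a few lines below). A hedged sketch of rendering that drop-in with text/template: the template text mirrors the logged unit, but the struct and field names are invented for this example and are not minikube's own.

	// renderunit.go: illustrative rendering of the kubelet drop-in shown above.
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		data := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.28.1", "multinode-124911", "192.168.39.116"}

		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}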
	I0914 22:09:49.770173   29206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:09:49.778662   29206 command_runner.go:130] > kubeadm
	I0914 22:09:49.778672   29206 command_runner.go:130] > kubectl
	I0914 22:09:49.778676   29206 command_runner.go:130] > kubelet
	I0914 22:09:49.778777   29206 binaries.go:44] Found k8s binaries, skipping transfer
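
The `sudo ls` check above is how the provisioner decides to skip the binary transfer: if kubeadm, kubectl and kubelet already exist under the versioned directory, they are reused. A small illustrative sketch of that decision (not minikube's own implementation).

	// havebinaries.go: sketch of the "found k8s binaries, skipping transfer" check.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func haveBinaries(dir string) bool {
		for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
			if _, err := os.Stat(filepath.Join(dir, b)); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		dir := "/var/lib/minikube/binaries/v1.28.1"
		if haveBinaries(dir) {
			fmt.Println("Found k8s binaries, skipping transfer")
		} else {
			fmt.Println("transferring binaries to", dir)
		}
	}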
	I0914 22:09:49.778823   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:09:49.786315   29206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0914 22:09:49.800284   29206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:09:49.814212   29206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0914 22:09:49.828692   29206 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0914 22:09:49.831737   29206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
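
The bash one-liner above keeps the control-plane.minikube.internal entry in /etc/hosts idempotent: strip any existing line for that name, then append the current IP. A Go equivalent, kept short and illustrative; paths and the host name come from the log, and error handling is minimal.

	// hostsentry.go: same effect as the logged one-liner, in plain Go.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const (
			hostsFile = "/etc/hosts"
			entryHost = "control-plane.minikube.internal"
			entryIP   = "192.168.39.116"
		)

		raw, err := os.ReadFile(hostsFile)
		if err != nil {
			log.Fatal(err)
		}

		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+entryHost) {
				continue // drop the stale mapping, like the grep -v in the shell version
			}
			kept = append(kept, line)
		}
		kept = append(kept, entryIP+"\t"+entryHost)

		if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}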
	I0914 22:09:49.842308   29206 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911 for IP: 192.168.39.116
	I0914 22:09:49.842339   29206 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:09:49.842510   29206 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:09:49.842572   29206 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:09:49.842661   29206 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key
	I0914 22:09:49.842735   29206 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key.12d79366
	I0914 22:09:49.842777   29206 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.key
	I0914 22:09:49.842787   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 22:09:49.842799   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 22:09:49.842811   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 22:09:49.842827   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 22:09:49.842847   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 22:09:49.842859   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 22:09:49.842871   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 22:09:49.842883   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 22:09:49.842927   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:09:49.842952   29206 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:09:49.842978   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:09:49.843012   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:09:49.843038   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:09:49.843060   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:09:49.843104   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:09:49.843130   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:09:49.843143   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem -> /usr/share/ca-certificates/13485.pem
	I0914 22:09:49.843154   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /usr/share/ca-certificates/134852.pem
	I0914 22:09:49.843746   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:09:49.864723   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:09:49.885647   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:09:49.905818   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:09:49.926027   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:09:49.947004   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:09:49.967552   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:09:49.988367   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:09:50.010132   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:09:50.035248   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:09:50.057074   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:09:50.077362   29206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:09:50.092776   29206 ssh_runner.go:195] Run: openssl version
	I0914 22:09:50.098257   29206 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0914 22:09:50.098372   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:09:50.107155   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:09:50.111180   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:09:50.111237   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:09:50.111271   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:09:50.116015   29206 command_runner.go:130] > b5213941
	I0914 22:09:50.116271   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:09:50.125249   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:09:50.134058   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:09:50.138176   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:09:50.138204   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:09:50.138247   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:09:50.143032   29206 command_runner.go:130] > 51391683
	I0914 22:09:50.143223   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:09:50.152159   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:09:50.161421   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:09:50.165504   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:09:50.165533   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:09:50.165569   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:09:50.170295   29206 command_runner.go:130] > 3ec20f2e
	I0914 22:09:50.170425   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
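
The lines above install the minikube CA and the test certificates into the guest's trust store by symlinking each PEM file to /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL locates trusted CAs at verification time. A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's actual implementation, and the helper name and paths are assumptions.

```go
// Illustrative sketch of the hash-named symlink step shown in the log above
// (openssl x509 -hash + ln -fs); not minikube's own code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a CA certificate and exposes
// it to the system trust store as /etc/ssl/certs/<hash>.0 so libssl can find
// it by hash lookup.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
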
	I0914 22:09:50.179290   29206 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:09:50.183327   29206 command_runner.go:130] > ca.crt
	I0914 22:09:50.183348   29206 command_runner.go:130] > ca.key
	I0914 22:09:50.183358   29206 command_runner.go:130] > healthcheck-client.crt
	I0914 22:09:50.183371   29206 command_runner.go:130] > healthcheck-client.key
	I0914 22:09:50.183382   29206 command_runner.go:130] > peer.crt
	I0914 22:09:50.183388   29206 command_runner.go:130] > peer.key
	I0914 22:09:50.183398   29206 command_runner.go:130] > server.crt
	I0914 22:09:50.183404   29206 command_runner.go:130] > server.key
	I0914 22:09:50.183462   29206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:09:50.188808   29206 command_runner.go:130] > Certificate will not expire
	I0914 22:09:50.188889   29206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:09:50.193930   29206 command_runner.go:130] > Certificate will not expire
	I0914 22:09:50.194254   29206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:09:50.199053   29206 command_runner.go:130] > Certificate will not expire
	I0914 22:09:50.199272   29206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:09:50.204038   29206 command_runner.go:130] > Certificate will not expire
	I0914 22:09:50.204295   29206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:09:50.209303   29206 command_runner.go:130] > Certificate will not expire
	I0914 22:09:50.209362   29206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:09:50.214223   29206 command_runner.go:130] > Certificate will not expire
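
The `-checkend 86400` invocations above ask whether each control-plane certificate expires within the next 24 hours. A small, self-contained sketch of the same check using crypto/x509; the certificate path in main is only an example taken from the log.

```go
// Illustrative sketch of the "will this certificate expire within 24h?" check
// that `openssl x509 -checkend 86400` performs in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within the given duration (86400s, i.e. 24h, in the log above).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```
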
	I0914 22:09:50.214493   29206 kubeadm.go:404] StartCluster: {Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:09:50.214618   29206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:09:50.214664   29206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:09:50.243631   29206 cri.go:89] found id: ""
	I0914 22:09:50.243709   29206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:09:50.252607   29206 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0914 22:09:50.252625   29206 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0914 22:09:50.252631   29206 command_runner.go:130] > /var/lib/minikube/etcd:
	I0914 22:09:50.252635   29206 command_runner.go:130] > member
	I0914 22:09:50.252720   29206 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:09:50.252738   29206 kubeadm.go:636] restartCluster start
	I0914 22:09:50.252786   29206 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:09:50.261282   29206 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:50.261891   29206 kubeconfig.go:92] found "multinode-124911" server: "https://192.168.39.116:8443"
	I0914 22:09:50.262289   29206 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:09:50.262569   29206 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:09:50.263144   29206 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 22:09:50.263350   29206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:09:50.272164   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:50.272231   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:50.282498   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:50.282519   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:50.282580   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:50.295660   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:50.796383   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:50.796449   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:50.810487   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:51.296049   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:51.296146   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:51.306710   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:51.796277   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:51.796394   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:51.807553   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:52.296128   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:52.296219   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:52.307535   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:52.796065   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:52.796142   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:52.807816   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:53.296432   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:53.296525   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:53.307261   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:53.796639   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:53.796713   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:53.807500   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:54.296039   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:54.296133   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:54.308133   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:54.796655   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:54.796724   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:54.807626   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:55.295765   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:55.295833   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:55.307321   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:55.795866   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:55.795959   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:55.808321   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:56.296074   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:56.296148   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:56.307199   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:56.796647   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:56.796736   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:56.807553   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:57.296034   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:57.296116   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:57.306572   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:57.796122   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:57.796202   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:57.807117   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:58.296730   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:58.296816   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:58.307762   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:58.796406   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:58.796469   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:58.807168   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:59.295750   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:59.295820   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:59.306779   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:09:59.795836   29206 api_server.go:166] Checking apiserver status ...
	I0914 22:09:59.795901   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:09:59.806585   29206 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:10:00.272423   29206 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:10:00.272452   29206 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:10:00.272462   29206 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:10:00.272516   29206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:10:00.300225   29206 cri.go:89] found id: ""
	I0914 22:10:00.300290   29206 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:10:00.313900   29206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:10:00.321970   29206 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0914 22:10:00.321997   29206 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0914 22:10:00.322009   29206 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0914 22:10:00.322020   29206 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:10:00.322082   29206 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:10:00.322135   29206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:10:00.331126   29206 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:10:00.331151   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:10:00.445872   29206 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:10:00.445899   29206 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0914 22:10:00.445909   29206 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0914 22:10:00.445919   29206 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:10:00.445934   29206 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0914 22:10:00.445943   29206 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:10:00.445952   29206 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0914 22:10:00.445965   29206 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0914 22:10:00.445982   29206 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:10:00.445995   29206 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:10:00.446008   29206 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:10:00.446019   29206 command_runner.go:130] > [certs] Using the existing "sa" key
	I0914 22:10:00.446049   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:10:00.492138   29206 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:10:00.696816   29206 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:10:01.205904   29206 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:10:01.257592   29206 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:10:01.460562   29206 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:10:01.463145   29206 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.017062668s)
	I0914 22:10:01.463179   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:10:01.643654   29206 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:10:01.643688   29206 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:10:01.643699   29206 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 22:10:01.643734   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:10:01.729040   29206 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:10:01.729070   29206 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:10:01.731931   29206 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:10:01.733342   29206 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:10:01.736790   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:10:01.823788   29206 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:10:01.826466   29206 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:10:01.826537   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:10:01.840837   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:10:02.366415   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:10:02.865918   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:10:03.366023   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:10:03.866158   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:10:03.893742   29206 command_runner.go:130] > 1062
	I0914 22:10:03.894049   29206 api_server.go:72] duration metric: took 2.067581545s to wait for apiserver process to appear ...
	I0914 22:10:03.894065   29206 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:10:03.894082   29206 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:10:03.894490   29206 api_server.go:269] stopped: https://192.168.39.116:8443/healthz: Get "https://192.168.39.116:8443/healthz": dial tcp 192.168.39.116:8443: connect: connection refused
	I0914 22:10:03.894541   29206 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:10:03.894987   29206 api_server.go:269] stopped: https://192.168.39.116:8443/healthz: Get "https://192.168.39.116:8443/healthz": dial tcp 192.168.39.116:8443: connect: connection refused
	I0914 22:10:04.395344   29206 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:10:07.613760   29206 api_server.go:279] https://192.168.39.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:10:07.613794   29206 api_server.go:103] status: https://192.168.39.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:10:07.613808   29206 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:10:07.712476   29206 api_server.go:279] https://192.168.39.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:10:07.712508   29206 api_server.go:103] status: https://192.168.39.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:10:07.895661   29206 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:10:07.900770   29206 api_server.go:279] https://192.168.39.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:10:07.900806   29206 api_server.go:103] status: https://192.168.39.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:10:08.395281   29206 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:10:08.400552   29206 api_server.go:279] https://192.168.39.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:10:08.400582   29206 api_server.go:103] status: https://192.168.39.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:10:08.895134   29206 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:10:08.908990   29206 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0914 22:10:08.909074   29206 round_trippers.go:463] GET https://192.168.39.116:8443/version
	I0914 22:10:08.909085   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:08.909093   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:08.909101   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:08.926255   29206 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0914 22:10:08.926274   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:08.926281   29206 round_trippers.go:580]     Content-Length: 263
	I0914 22:10:08.926286   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:08 GMT
	I0914 22:10:08.926291   29206 round_trippers.go:580]     Audit-Id: f0b0bb80-6152-41f7-a2c9-3576d7256798
	I0914 22:10:08.926296   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:08.926301   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:08.926306   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:08.926311   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:08.926433   29206 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 22:10:08.926536   29206 api_server.go:141] control plane version: v1.28.1
	I0914 22:10:08.926556   29206 api_server.go:131] duration metric: took 5.032483241s to wait for apiserver health ...
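
The healthz wait above repeatedly issues GET /healthz, treating 403 (anonymous user not yet authorized) and 500 (post-start hooks still pending) as "not ready yet" until a 200 "ok" arrives. A hedged sketch of such a polling loop follows; the endpoint comes from the log, while the interval and the skipped TLS verification are simplifications to keep the example self-contained and are not what production code should do.

```go
// Illustrative sketch of an apiserver readiness poll: retry GET /healthz with
// a hard deadline until the server answers 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch only; real code should verify the
			// apiserver certificate against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			// 403 and 500 both mean "keep waiting", as seen in the log;
			// only 200 counts as healthy.
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.116:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
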
	I0914 22:10:08.926566   29206 cni.go:84] Creating CNI manager for ""
	I0914 22:10:08.926580   29206 cni.go:136] 3 nodes found, recommending kindnet
	I0914 22:10:08.928498   29206 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 22:10:08.930202   29206 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:10:08.937188   29206 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 22:10:08.937214   29206 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0914 22:10:08.937224   29206 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0914 22:10:08.937232   29206 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:10:08.937250   29206 command_runner.go:130] > Access: 2023-09-14 22:09:36.137726050 +0000
	I0914 22:10:08.937262   29206 command_runner.go:130] > Modify: 2023-09-13 23:09:37.000000000 +0000
	I0914 22:10:08.937270   29206 command_runner.go:130] > Change: 2023-09-14 22:09:34.480726050 +0000
	I0914 22:10:08.937277   29206 command_runner.go:130] >  Birth: -
	I0914 22:10:08.937582   29206 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 22:10:08.937597   29206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:10:08.967827   29206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 22:10:10.158207   29206 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:10:10.170154   29206 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:10:10.173518   29206 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0914 22:10:10.189235   29206 command_runner.go:130] > daemonset.apps/kindnet configured
	I0914 22:10:10.191605   29206 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.22374597s)
	I0914 22:10:10.191643   29206 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:10:10.191740   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:10:10.191752   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.191764   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.191775   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.196059   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:10:10.196081   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.196090   29206 round_trippers.go:580]     Audit-Id: 4302858d-3b7d-4102-87c2-82169d2a69aa
	I0914 22:10:10.196096   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.196105   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.196113   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.196125   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.196132   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.197362   29206 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"838"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82646 chars]
	I0914 22:10:10.201659   29206 system_pods.go:59] 12 kube-system pods found
	I0914 22:10:10.201691   29206 system_pods.go:61] "coredns-5dd5756b68-ssj9q" [aadacae8-9f4d-4c24-91c7-78a88d187f73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:10:10.201700   29206 system_pods.go:61] "etcd-multinode-124911" [1b195f1a-48a6-4b46-a819-2aeb9fe4e00c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:10:10.201709   29206 system_pods.go:61] "kindnet-274xj" [6d12f7c0-2ad9-436f-ab5d-528c4823865c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 22:10:10.201723   29206 system_pods.go:61] "kindnet-mmwd5" [4f33c106-87c4-42d3-b6ae-eb325637540e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 22:10:10.201734   29206 system_pods.go:61] "kindnet-vjv8m" [d5b0f0e4-3bb0-4e77-8a6f-7b350a511f5a] Running
	I0914 22:10:10.201742   29206 system_pods.go:61] "kube-apiserver-multinode-124911" [e9a93d33-82f3-4cfe-9b2c-92560dd09d09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:10:10.201751   29206 system_pods.go:61] "kube-controller-manager-multinode-124911" [3efae123-9cdd-457a-a317-77370a6c5288] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:10:10.201760   29206 system_pods.go:61] "kube-proxy-2kd4p" [de9e2ee3-364a-447b-9d7f-be85d86838ae] Running
	I0914 22:10:10.201765   29206 system_pods.go:61] "kube-proxy-5tcff" [bfc8d22f-954e-4a49-878e-9d1711d49c40] Running
	I0914 22:10:10.201770   29206 system_pods.go:61] "kube-proxy-c4qjg" [8214b42e-6656-4e01-bc47-82d6ab6592e5] Running
	I0914 22:10:10.201784   29206 system_pods.go:61] "kube-scheduler-multinode-124911" [f8d502b7-9ee7-474e-ab64-9f721ee6970e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:10:10.201793   29206 system_pods.go:61] "storage-provisioner" [aada9d30-e15d-4405-a7e2-e979dd4b8e0d] Running
	I0914 22:10:10.201799   29206 system_pods.go:74] duration metric: took 10.149101ms to wait for pod list to return data ...
	I0914 22:10:10.201809   29206 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:10:10.201868   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0914 22:10:10.201877   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.201884   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.201889   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.205378   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:10.205396   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.205402   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.205408   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.205413   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.205421   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.205430   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.205442   29206 round_trippers.go:580]     Audit-Id: 68a35fb1-4205-4d3f-b795-d487d4ba75d2
	I0914 22:10:10.205600   29206 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"838"},"items":[{"metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 14683 chars]
	I0914 22:10:10.206307   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:10:10.206327   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:10:10.206336   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:10:10.206340   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:10:10.206344   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:10:10.206350   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:10:10.206356   29206 node_conditions.go:105] duration metric: took 4.541117ms to run NodePressure ...
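
The NodePressure verification above reads each node's ephemeral-storage and CPU capacity from the /api/v1/nodes response. A rough client-go equivalent is sketched below, assuming a kubeconfig at the path used by this test run; it is an illustration of the capacity lookup, not the code minikube runs.

```go
// Illustrative sketch: list nodes with client-go and print the capacity
// figures the log reports (ephemeral storage and CPU per node).
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17243-6287/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral capacity %s, cpu capacity %s\n", n.Name, storage.String(), cpu.String())
	}
}
```
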
	I0914 22:10:10.206376   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:10:10.366374   29206 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0914 22:10:10.419815   29206 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0914 22:10:10.421744   29206 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:10:10.421854   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0914 22:10:10.421868   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.421880   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.421894   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.427542   29206 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 22:10:10.427560   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.427567   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.427573   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.427579   29206 round_trippers.go:580]     Audit-Id: eafbbe88-aa65-4763-9b80-758c60caa0e4
	I0914 22:10:10.427584   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.427590   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.427595   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.427974   29206 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"841"},"items":[{"metadata":{"name":"etcd-multinode-124911","namespace":"kube-system","uid":"1b195f1a-48a6-4b46-a819-2aeb9fe4e00c","resourceVersion":"753","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.116:2379","kubernetes.io/config.hash":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.mirror":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.seen":"2023-09-14T21:59:20.641783376Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0914 22:10:10.428946   29206 kubeadm.go:787] kubelet initialised
	I0914 22:10:10.428962   29206 kubeadm.go:788] duration metric: took 7.197095ms waiting for restarted kubelet to initialise ...
	I0914 22:10:10.428969   29206 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:10:10.429021   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:10:10.429029   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.429037   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.429042   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.432101   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:10.432116   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.432122   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.432128   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.432133   29206 round_trippers.go:580]     Audit-Id: fb8f856c-d25e-4317-94c7-2d592d851f7f
	I0914 22:10:10.432141   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.432149   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.432157   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.433286   29206 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"841"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82646 chars]
	I0914 22:10:10.435706   29206 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:10.435776   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:10.435785   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.435792   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.435799   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.437692   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:10.437719   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.437725   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.437731   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.437736   29206 round_trippers.go:580]     Audit-Id: 976623c0-54e0-4afb-ab24-071c4effe50e
	I0914 22:10:10.437742   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.437750   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.437755   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.437887   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:10.438273   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:10.438285   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.438292   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.438297   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.440217   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:10.440235   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.440244   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.440253   29206 round_trippers.go:580]     Audit-Id: 1a9f7881-d313-4d36-a439-14a9c283c7d7
	I0914 22:10:10.440260   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.440270   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.440279   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.440293   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.440479   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:10.440802   29206 pod_ready.go:97] node "multinode-124911" hosting pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.440819   29206 pod_ready.go:81] duration metric: took 5.095066ms waiting for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	E0914 22:10:10.440826   29206 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-124911" hosting pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.440834   29206 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:10.440887   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-124911
	I0914 22:10:10.440896   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.440903   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.440909   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.442797   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:10.442812   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.442819   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.442824   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.442830   29206 round_trippers.go:580]     Audit-Id: 4e73fd97-2a88-40ca-ab1d-5d43fdd065e5
	I0914 22:10:10.442835   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.442840   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.442845   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.442973   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124911","namespace":"kube-system","uid":"1b195f1a-48a6-4b46-a819-2aeb9fe4e00c","resourceVersion":"753","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.116:2379","kubernetes.io/config.hash":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.mirror":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.seen":"2023-09-14T21:59:20.641783376Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0914 22:10:10.443314   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:10.443327   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.443333   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.443339   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.444961   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:10.444973   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.444980   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.444987   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.444995   29206 round_trippers.go:580]     Audit-Id: 27273e1c-da4b-43bb-b236-92ae56f5cc4f
	I0914 22:10:10.445004   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.445012   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.445025   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.445188   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:10.445469   29206 pod_ready.go:97] node "multinode-124911" hosting pod "etcd-multinode-124911" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.445486   29206 pod_ready.go:81] duration metric: took 4.644384ms waiting for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	E0914 22:10:10.445492   29206 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-124911" hosting pod "etcd-multinode-124911" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.445508   29206 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:10.445550   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124911
	I0914 22:10:10.445557   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.445564   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.445570   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.447236   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:10.447249   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.447255   29206 round_trippers.go:580]     Audit-Id: a2c98d26-95d5-43ca-aea7-a4efd047193e
	I0914 22:10:10.447260   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.447266   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.447291   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.447297   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.447305   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.447472   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124911","namespace":"kube-system","uid":"e9a93d33-82f3-4cfe-9b2c-92560dd09d09","resourceVersion":"755","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.116:8443","kubernetes.io/config.hash":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.mirror":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.seen":"2023-09-14T21:59:20.641778793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0914 22:10:10.447817   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:10.447828   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.447835   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.447842   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.449927   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:10.449938   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.449943   29206 round_trippers.go:580]     Audit-Id: 7f803929-3eff-463a-a0f8-15927d60b278
	I0914 22:10:10.449948   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.449954   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.449958   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.449964   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.449969   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.450159   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:10.450427   29206 pod_ready.go:97] node "multinode-124911" hosting pod "kube-apiserver-multinode-124911" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.450442   29206 pod_ready.go:81] duration metric: took 4.92371ms waiting for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	E0914 22:10:10.450449   29206 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-124911" hosting pod "kube-apiserver-multinode-124911" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.450455   29206 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:10.450514   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124911
	I0914 22:10:10.450522   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.450529   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.450534   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.453477   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:10.453492   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.453501   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.453508   29206 round_trippers.go:580]     Audit-Id: 1ebbc8f7-8a84-44ff-a913-c591fa30f4f8
	I0914 22:10:10.453516   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.453525   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.453537   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.453548   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.453771   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124911","namespace":"kube-system","uid":"3efae123-9cdd-457a-a317-77370a6c5288","resourceVersion":"745","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.mirror":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.seen":"2023-09-14T21:59:20.641781682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0914 22:10:10.592490   29206 request.go:629] Waited for 138.323987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:10.592566   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:10.592573   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.592588   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.592602   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.595315   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:10.595336   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.595348   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.595357   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.595366   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.595374   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.595383   29206 round_trippers.go:580]     Audit-Id: 1a0d4e13-ddcd-4143-85e2-671dff325d0a
	I0914 22:10:10.595393   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.595571   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:10.595918   29206 pod_ready.go:97] node "multinode-124911" hosting pod "kube-controller-manager-multinode-124911" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.595938   29206 pod_ready.go:81] duration metric: took 145.456041ms waiting for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	E0914 22:10:10.595947   29206 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-124911" hosting pod "kube-controller-manager-multinode-124911" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.595955   29206 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:10.792368   29206 request.go:629] Waited for 196.351052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:10:10.792436   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:10:10.792442   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.792452   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.792461   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.795109   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:10.795128   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.795135   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.795140   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.795146   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.795151   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.795159   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.795164   29206 round_trippers.go:580]     Audit-Id: fbd74a1b-2d75-4a95-a289-0bc03f4007b1
	I0914 22:10:10.795390   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2kd4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"de9e2ee3-364a-447b-9d7f-be85d86838ae","resourceVersion":"820","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0914 22:10:10.992210   29206 request.go:629] Waited for 196.383067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:10.992275   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:10.992280   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:10.992288   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:10.992297   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:10.994908   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:10.994925   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:10.994931   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:10.994937   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:10.994942   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:10.994947   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:10.994952   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:10 GMT
	I0914 22:10:10.994959   29206 round_trippers.go:580]     Audit-Id: 565989b0-f395-4ce6-b3da-456cbf87b567
	I0914 22:10:10.995124   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:10.995526   29206 pod_ready.go:97] node "multinode-124911" hosting pod "kube-proxy-2kd4p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.995550   29206 pod_ready.go:81] duration metric: took 399.589371ms waiting for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	E0914 22:10:10.995559   29206 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-124911" hosting pod "kube-proxy-2kd4p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:10.995566   29206 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5tcff" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:11.191919   29206 request.go:629] Waited for 196.288725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:10:11.191994   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:10:11.192004   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:11.192014   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:11.192026   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:11.194746   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:11.194768   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:11.194779   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:11 GMT
	I0914 22:10:11.194787   29206 round_trippers.go:580]     Audit-Id: 2ff010af-e0b0-483f-a05f-82cf8f372137
	I0914 22:10:11.194796   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:11.194803   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:11.194812   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:11.194822   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:11.195381   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5tcff","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc8d22f-954e-4a49-878e-9d1711d49c40","resourceVersion":"705","creationTimestamp":"2023-09-14T22:01:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0914 22:10:11.392323   29206 request.go:629] Waited for 196.540874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:10:11.392383   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:10:11.392394   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:11.392401   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:11.392407   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:11.395058   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:11.395081   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:11.395091   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:11.395099   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:11.395108   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:11.395118   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:11 GMT
	I0914 22:10:11.395135   29206 round_trippers.go:580]     Audit-Id: b7e8f72c-d1db-40fc-a368-d4be9d9bb45e
	I0914 22:10:11.395143   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:11.395526   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m03","uid":"5e8b04da-e8ae-403d-9e94-bb008093a0b9","resourceVersion":"839","creationTimestamp":"2023-09-14T22:02:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0914 22:10:11.395829   29206 pod_ready.go:92] pod "kube-proxy-5tcff" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:11.395844   29206 pod_ready.go:81] duration metric: took 400.26973ms waiting for pod "kube-proxy-5tcff" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:11.395857   29206 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:11.592281   29206 request.go:629] Waited for 196.342001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:10:11.592349   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:10:11.592361   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:11.592374   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:11.592384   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:11.594829   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:11.594844   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:11.594855   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:11.594863   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:11.594871   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:11.594878   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:11.594887   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:11 GMT
	I0914 22:10:11.594895   29206 round_trippers.go:580]     Audit-Id: 2c2043da-e572-43ba-b32e-d80661b8dc51
	I0914 22:10:11.595104   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4qjg","generateName":"kube-proxy-","namespace":"kube-system","uid":"8214b42e-6656-4e01-bc47-82d6ab6592e5","resourceVersion":"501","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0914 22:10:11.791906   29206 request.go:629] Waited for 196.288874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:10:11.791969   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:10:11.791975   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:11.791982   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:11.791989   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:11.795629   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:11.795653   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:11.795662   29206 round_trippers.go:580]     Audit-Id: b3458b1a-42f7-47b6-8f0f-e813f5c256fc
	I0914 22:10:11.795670   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:11.795677   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:11.795685   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:11.795693   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:11.795702   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:11 GMT
	I0914 22:10:11.795932   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"568","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I0914 22:10:11.796273   29206 pod_ready.go:92] pod "kube-proxy-c4qjg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:11.796291   29206 pod_ready.go:81] duration metric: took 400.423329ms waiting for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:11.796303   29206 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:11.992734   29206 request.go:629] Waited for 196.364937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:10:11.992809   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:10:11.992817   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:11.992824   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:11.992833   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:11.995098   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:11.995114   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:11.995121   29206 round_trippers.go:580]     Audit-Id: 871d351c-ff74-4d29-b3ae-134330d39681
	I0914 22:10:11.995126   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:11.995131   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:11.995136   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:11.995141   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:11.995146   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:11 GMT
	I0914 22:10:11.995311   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124911","namespace":"kube-system","uid":"f8d502b7-9ee7-474e-ab64-9f721ee6970e","resourceVersion":"747","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.mirror":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.seen":"2023-09-14T21:59:20.641782607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0914 22:10:12.191972   29206 request.go:629] Waited for 196.277184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:12.192044   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:12.192054   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:12.192063   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:12.192072   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:12.194298   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:12.194317   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:12.194326   29206 round_trippers.go:580]     Audit-Id: b233cd55-0f43-4b06-8515-c1dc1b1a9c14
	I0914 22:10:12.194334   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:12.194343   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:12.194353   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:12.194363   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:12.194376   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:12 GMT
	I0914 22:10:12.194609   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:12.195006   29206 pod_ready.go:97] node "multinode-124911" hosting pod "kube-scheduler-multinode-124911" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:12.195027   29206 pod_ready.go:81] duration metric: took 398.706993ms waiting for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	E0914 22:10:12.195037   29206 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-124911" hosting pod "kube-scheduler-multinode-124911" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-124911" has status "Ready":"False"
	I0914 22:10:12.195053   29206 pod_ready.go:38] duration metric: took 1.766074756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
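
The skips above all follow the same pattern: before waiting on a kube-system pod, minikube first looks at the node hosting it, and because multinode-124911 still reports its Ready condition as False, each pod-level wait is recorded as "(skipping!)". Below is a minimal client-go sketch of that node-gating check, assuming direct API access; it is an illustration, not minikube's actual pod_ready code, and the kubeconfig path, namespace, and pod name are simply copied from the log above.

// Sketch: skip a pod readiness wait when the hosting node is not Ready,
// mirroring the pod_ready.go:97 messages above. Not minikube's implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as written by this run; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17243-6287/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-ssj9q", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		// Same outcome the log records: node status "Ready":"False", so the pod wait is skipped.
		fmt.Printf("node %q not Ready, skipping wait for pod %q\n", node.Name, pod.Name)
		return
	}
	fmt.Printf("node %q Ready, would now wait for pod %q\n", node.Name, pod.Name)
}
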
	I0914 22:10:12.195079   29206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:10:12.205873   29206 command_runner.go:130] > -16
	I0914 22:10:12.206095   29206 ops.go:34] apiserver oom_adj: -16
	I0914 22:10:12.206109   29206 kubeadm.go:640] restartCluster took 21.953364794s
	I0914 22:10:12.206119   29206 kubeadm.go:406] StartCluster complete in 21.991632012s
	I0914 22:10:12.206137   29206 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:10:12.206217   29206 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:10:12.207072   29206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:10:12.207316   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:10:12.207444   29206 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:10:12.207585   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:10:12.210306   29206 out.go:177] * Enabled addons: 
	I0914 22:10:12.207654   29206 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:10:12.211869   29206 addons.go:502] enable addons completed in 4.435836ms: enabled=[]
	I0914 22:10:12.210570   29206 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
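
The "Waited for ... due to client-side throttling, not priority and fairness" messages scattered through this section come from client-go's own token-bucket limiter rather than from the API server: the rest.Config dumped just above leaves QPS and Burst at 0, so client-go falls back to its defaults (roughly 5 requests per second with a burst of 10), and the bursts of node and pod GETs get spaced out on the client side. The sketch below is purely illustrative and is not what the test harness does; it just shows the knob those log messages point at.

// Sketch: a clientset with raised client-side rate limits, which would remove
// the request.go:629 throttling waits at the cost of more API-server load.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// QPS=0/Burst=0 (as in the config above) means client-go's defaults apply;
	// the values here are illustrative, not a recommendation.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newFastClient("/home/jenkins/minikube-integration/17243-6287/kubeconfig"); err != nil {
		panic(err)
	}
}
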
	I0914 22:10:12.212234   29206 round_trippers.go:463] GET https://192.168.39.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 22:10:12.212252   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:12.212264   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:12.212276   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:12.214680   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:12.214702   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:12.214712   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:12.214723   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:12.214733   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:12.214742   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:12.214751   29206 round_trippers.go:580]     Content-Length: 291
	I0914 22:10:12.214765   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:12 GMT
	I0914 22:10:12.214774   29206 round_trippers.go:580]     Audit-Id: 3e95031d-98d4-4470-bf02-75777d29fde9
	I0914 22:10:12.214804   29206 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d40ee9-9834-4f82-84c2-51e3c14c181f","resourceVersion":"840","creationTimestamp":"2023-09-14T21:59:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0914 22:10:12.214960   29206 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-124911" context rescaled to 1 replicas
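
The GET on /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale above returns an autoscaling/v1 Scale object whose spec.replicas is already 1, before the deployment is noted as "rescaled to 1 replicas". A minimal client-go sketch of that read-then-update flow follows; scaleDeployment is a hypothetical helper, not minikube's kapi code.

// Sketch: pin a Deployment to a replica count via the scale subresource,
// matching the GET shown above. Hypothetical helper, not minikube's kapi.go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func scaleDeployment(ctx context.Context, cs *kubernetes.Clientset, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		fmt.Printf("%s/%s already at %d replicas, nothing to do\n", ns, name, replicas)
		return nil
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17243-6287/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := scaleDeployment(context.Background(), cs, "kube-system", "coredns", 1); err != nil {
		panic(err)
	}
}
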
	I0914 22:10:12.214987   29206 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:10:12.218342   29206 out.go:177] * Verifying Kubernetes components...
	I0914 22:10:12.219711   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:10:12.298334   29206 command_runner.go:130] > apiVersion: v1
	I0914 22:10:12.298358   29206 command_runner.go:130] > data:
	I0914 22:10:12.298365   29206 command_runner.go:130] >   Corefile: |
	I0914 22:10:12.298371   29206 command_runner.go:130] >     .:53 {
	I0914 22:10:12.298378   29206 command_runner.go:130] >         log
	I0914 22:10:12.298385   29206 command_runner.go:130] >         errors
	I0914 22:10:12.298391   29206 command_runner.go:130] >         health {
	I0914 22:10:12.298399   29206 command_runner.go:130] >            lameduck 5s
	I0914 22:10:12.298406   29206 command_runner.go:130] >         }
	I0914 22:10:12.298434   29206 command_runner.go:130] >         ready
	I0914 22:10:12.298443   29206 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0914 22:10:12.298453   29206 command_runner.go:130] >            pods insecure
	I0914 22:10:12.298464   29206 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0914 22:10:12.298479   29206 command_runner.go:130] >            ttl 30
	I0914 22:10:12.298485   29206 command_runner.go:130] >         }
	I0914 22:10:12.298492   29206 command_runner.go:130] >         prometheus :9153
	I0914 22:10:12.298499   29206 command_runner.go:130] >         hosts {
	I0914 22:10:12.298511   29206 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0914 22:10:12.298518   29206 command_runner.go:130] >            fallthrough
	I0914 22:10:12.298529   29206 command_runner.go:130] >         }
	I0914 22:10:12.298540   29206 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0914 22:10:12.298551   29206 command_runner.go:130] >            max_concurrent 1000
	I0914 22:10:12.298560   29206 command_runner.go:130] >         }
	I0914 22:10:12.298567   29206 command_runner.go:130] >         cache 30
	I0914 22:10:12.298579   29206 command_runner.go:130] >         loop
	I0914 22:10:12.298590   29206 command_runner.go:130] >         reload
	I0914 22:10:12.298599   29206 command_runner.go:130] >         loadbalance
	I0914 22:10:12.298606   29206 command_runner.go:130] >     }
	I0914 22:10:12.298616   29206 command_runner.go:130] > kind: ConfigMap
	I0914 22:10:12.298622   29206 command_runner.go:130] > metadata:
	I0914 22:10:12.298634   29206 command_runner.go:130] >   creationTimestamp: "2023-09-14T21:59:20Z"
	I0914 22:10:12.298643   29206 command_runner.go:130] >   name: coredns
	I0914 22:10:12.298653   29206 command_runner.go:130] >   namespace: kube-system
	I0914 22:10:12.298660   29206 command_runner.go:130] >   resourceVersion: "363"
	I0914 22:10:12.298672   29206 command_runner.go:130] >   uid: a21783c3-59aa-4441-b3d2-929766f52988
	I0914 22:10:12.301075   29206 node_ready.go:35] waiting up to 6m0s for node "multinode-124911" to be "Ready" ...
	I0914 22:10:12.301244   29206 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
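
The ConfigMap dump above is what the start.go:890 decision rests on: the hosts stanza already maps 192.168.39.1 to host.minikube.internal, so no Corefile rewrite is needed. The run itself reads the ConfigMap with kubectl over SSH (the ssh_runner command above); the sketch below performs an equivalent check through the API and is only an illustration.

// Sketch: check whether the coredns Corefile already carries the
// host.minikube.internal host record, as start.go:890 concludes above.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17243-6287/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		// Same conclusion as the log: host record already present, skip the rewrite.
		fmt.Println("CoreDNS already contains \"host.minikube.internal\", skipping")
		return
	}
	fmt.Println("host record missing; a patched Corefile would be applied here")
}
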
	I0914 22:10:12.392371   29206 request.go:629] Waited for 91.217565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:12.392440   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:12.392445   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:12.392452   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:12.392458   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:12.395010   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:12.395033   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:12.395044   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:12.395053   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:12.395059   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:12.395064   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:12.395069   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:12 GMT
	I0914 22:10:12.395075   29206 round_trippers.go:580]     Audit-Id: 1dba6a54-6344-43be-8168-dda11117b4de
	I0914 22:10:12.395346   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:12.592074   29206 request.go:629] Waited for 196.356969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:12.592140   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:12.592145   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:12.592152   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:12.592158   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:12.594611   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:12.594628   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:12.594635   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:12.594640   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:12.594646   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:12.594651   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:12.594659   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:12 GMT
	I0914 22:10:12.594664   29206 round_trippers.go:580]     Audit-Id: 858cf156-88bd-4c2d-8e0f-41fc7b57905a
	I0914 22:10:12.594881   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:13.095939   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:13.095963   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:13.095971   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:13.095977   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:13.098892   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:13.098935   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:13.098945   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:13.098953   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:13.098961   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:13.098973   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:13 GMT
	I0914 22:10:13.098982   29206 round_trippers.go:580]     Audit-Id: 2d8e053a-36ee-4a81-8cea-cdaccf40629b
	I0914 22:10:13.098993   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:13.099214   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:13.595639   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:13.595659   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:13.595667   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:13.595674   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:13.597993   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:13.598015   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:13.598025   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:13.598033   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:13.598041   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:13.598049   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:13 GMT
	I0914 22:10:13.598058   29206 round_trippers.go:580]     Audit-Id: c78654fa-933b-46e4-98b2-2b57c1eec611
	I0914 22:10:13.598065   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:13.598247   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:14.095601   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:14.095622   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:14.095632   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:14.095639   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:14.106527   29206 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0914 22:10:14.106550   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:14.106558   29206 round_trippers.go:580]     Audit-Id: abfd011b-5b47-4029-a9af-44126276a74a
	I0914 22:10:14.106564   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:14.106569   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:14.106574   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:14.106580   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:14.106585   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:14 GMT
	I0914 22:10:14.106978   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:14.596159   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:14.596185   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:14.596203   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:14.596213   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:14.598919   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:14.598943   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:14.598950   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:14.598955   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:14 GMT
	I0914 22:10:14.598960   29206 round_trippers.go:580]     Audit-Id: 47a9c09f-037a-4f9f-a15d-8f87d694047e
	I0914 22:10:14.598965   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:14.598970   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:14.598975   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:14.599184   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:14.599616   29206 node_ready.go:58] node "multinode-124911" has status "Ready":"False"
	I0914 22:10:15.095607   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:15.095627   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:15.095635   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:15.095641   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:15.098377   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:15.098396   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:15.098417   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:15.098424   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:15 GMT
	I0914 22:10:15.098432   29206 round_trippers.go:580]     Audit-Id: b5a087a1-4d7c-4ed9-b2cd-d872429fb77e
	I0914 22:10:15.098440   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:15.098449   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:15.098461   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:15.098876   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:15.595561   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:15.595592   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:15.595605   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:15.595616   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:15.599290   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:15.599311   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:15.599321   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:15.599330   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:15.599337   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:15.599345   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:15 GMT
	I0914 22:10:15.599354   29206 round_trippers.go:580]     Audit-Id: 6513ed11-03f8-4ddb-babe-d6bfe8503943
	I0914 22:10:15.599371   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:15.599530   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:16.096258   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:16.096284   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:16.096296   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:16.096305   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:16.098808   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:16.098830   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:16.098842   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:16.098849   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:16.098857   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:16.098866   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:16.098875   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:16 GMT
	I0914 22:10:16.098882   29206 round_trippers.go:580]     Audit-Id: c86c2f4e-54d0-451c-b2be-4a8db4278671
	I0914 22:10:16.099198   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:16.596310   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:16.596334   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:16.596341   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:16.596347   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:16.598873   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:16.598897   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:16.598907   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:16.598917   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:16 GMT
	I0914 22:10:16.598927   29206 round_trippers.go:580]     Audit-Id: 9bdbac02-beb1-4cf1-a880-c57a17afd96a
	I0914 22:10:16.598935   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:16.598941   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:16.598946   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:16.599549   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:16.599840   29206 node_ready.go:58] node "multinode-124911" has status "Ready":"False"
	I0914 22:10:17.096280   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:17.096305   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:17.096314   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:17.096321   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:17.099683   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:17.099708   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:17.099719   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:17.099747   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:17.099757   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:17.099766   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:17.099777   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:17 GMT
	I0914 22:10:17.099787   29206 round_trippers.go:580]     Audit-Id: 8a982e20-27bc-4304-ad0b-c17351dd6382
	I0914 22:10:17.100338   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:17.596068   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:17.596090   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:17.596103   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:17.596110   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:17.598612   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:17.598634   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:17.598645   29206 round_trippers.go:580]     Audit-Id: 2a773ce7-88bd-4f5f-8c63-d4675ddd371d
	I0914 22:10:17.598652   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:17.598658   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:17.598665   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:17.598672   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:17.598680   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:17 GMT
	I0914 22:10:17.599017   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"740","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0914 22:10:18.095672   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:18.095697   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:18.095705   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:18.095711   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:18.098133   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:18.098154   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:18.098160   29206 round_trippers.go:580]     Audit-Id: c2457f25-8c8c-4691-837d-1c79609f3ab4
	I0914 22:10:18.098167   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:18.098172   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:18.098177   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:18.098182   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:18.098187   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:18 GMT
	I0914 22:10:18.098358   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:18.098644   29206 node_ready.go:49] node "multinode-124911" has status "Ready":"True"
	I0914 22:10:18.098657   29206 node_ready.go:38] duration metric: took 5.797560963s waiting for node "multinode-124911" to be "Ready" ...
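	(Editorial note: the node_ready lines above are the tail of minikube's readiness poll — it re-fetches the Node object roughly every 500ms and inspects its Ready condition until it turns True, here after ~5.8s. As a rough illustration only, and not minikube's actual implementation, a minimal client-go sketch of that polling pattern is shown below; the kubeconfig path, node name, interval, and timeout are assumptions taken from this log, and the program simply panics on error for brevity.)

	// Hypothetical sketch of the node-Ready poll visible in the log above.
	// Assumes client-go is on the module path and a kubeconfig at the default location.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 500ms with a 6-minute budget, matching the cadence and
		// timeout suggested by the surrounding log lines (assumed values).
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-124911", metav1.GetOptions{})
			if err != nil {
				// A production loop would likely tolerate transient API errors;
				// for this sketch, abort the poll.
				return false, err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}

	(The same pattern then repeats below for each system pod, e.g. coredns-5dd5756b68-ssj9q, checking the pod's Ready condition instead of the node's.)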
	I0914 22:10:18.098664   29206 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:10:18.098716   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:10:18.098723   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:18.098730   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:18.098736   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:18.105193   29206 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 22:10:18.105218   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:18.105230   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:18.105240   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:18 GMT
	I0914 22:10:18.105251   29206 round_trippers.go:580]     Audit-Id: 2ca77f77-5c3b-4271-b08e-d187114f24ee
	I0914 22:10:18.105261   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:18.105279   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:18.105290   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:18.107305   29206 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"867"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82215 chars]
	I0914 22:10:18.109736   29206 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:18.109797   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:18.109802   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:18.109809   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:18.109816   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:18.112724   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:18.112741   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:18.112748   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:18.112754   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:18.112759   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:18.112764   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:18 GMT
	I0914 22:10:18.112774   29206 round_trippers.go:580]     Audit-Id: 806d505c-487d-46e6-9e02-c55546a53dfd
	I0914 22:10:18.112783   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:18.113026   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:18.113413   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:18.113423   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:18.113430   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:18.113436   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:18.115293   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:18.115307   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:18.115314   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:18 GMT
	I0914 22:10:18.115319   29206 round_trippers.go:580]     Audit-Id: aeee2155-c6e8-44aa-ab10-e18457595a5e
	I0914 22:10:18.115324   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:18.115329   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:18.115334   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:18.115347   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:18.115507   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:18.115837   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:18.115849   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:18.115856   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:18.115861   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:18.117577   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:18.117593   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:18.117601   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:18 GMT
	I0914 22:10:18.117606   29206 round_trippers.go:580]     Audit-Id: 1812e9d2-1fad-4515-98e7-aa579050bb85
	I0914 22:10:18.117612   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:18.117624   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:18.117642   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:18.117651   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:18.117789   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:18.118198   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:18.118211   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:18.118218   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:18.118224   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:18.119932   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:18.119946   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:18.119952   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:18 GMT
	I0914 22:10:18.119957   29206 round_trippers.go:580]     Audit-Id: 7831071e-377c-438f-90e4-761ab98b4d45
	I0914 22:10:18.119962   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:18.119967   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:18.119972   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:18.119977   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:18.120136   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:18.620732   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:18.620757   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:18.620765   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:18.620771   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:18.624119   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:18.624137   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:18.624143   29206 round_trippers.go:580]     Audit-Id: 354ef36d-a4cd-401c-9986-8ad0405468f9
	I0914 22:10:18.624149   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:18.624154   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:18.624159   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:18.624164   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:18.624169   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:18 GMT
	I0914 22:10:18.624582   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:18.625007   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:18.625021   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:18.625028   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:18.625034   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:18.627048   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:18.627061   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:18.627068   29206 round_trippers.go:580]     Audit-Id: 3d6ab65d-fe36-4220-be9b-d24053dfafbc
	I0914 22:10:18.627073   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:18.627078   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:18.627083   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:18.627091   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:18.627099   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:18 GMT
	I0914 22:10:18.627454   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:19.121121   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:19.121151   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:19.121159   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:19.121165   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:19.125384   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:10:19.125406   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:19.125413   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:19.125418   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:19 GMT
	I0914 22:10:19.125423   29206 round_trippers.go:580]     Audit-Id: a9aa6b84-7d75-48d3-8e00-a402e5034670
	I0914 22:10:19.125428   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:19.125433   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:19.125438   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:19.126287   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:19.126760   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:19.126776   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:19.126784   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:19.126790   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:19.129461   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:19.129479   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:19.129485   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:19.129491   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:19.129498   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:19.129505   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:19 GMT
	I0914 22:10:19.129513   29206 round_trippers.go:580]     Audit-Id: d757078a-11cc-4669-87f9-d35ba744abc1
	I0914 22:10:19.129532   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:19.129793   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:19.621020   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:19.621041   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:19.621049   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:19.621056   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:19.623716   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:19.623739   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:19.623748   29206 round_trippers.go:580]     Audit-Id: c78031b5-d3b9-4ac5-a325-aae3e382ad60
	I0914 22:10:19.623756   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:19.623763   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:19.623772   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:19.623780   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:19.623790   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:19 GMT
	I0914 22:10:19.624007   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:19.624572   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:19.624590   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:19.624600   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:19.624610   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:19.627594   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:19.627607   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:19.627612   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:19.627617   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:19.627622   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:19.627627   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:19 GMT
	I0914 22:10:19.627633   29206 round_trippers.go:580]     Audit-Id: 10d266d1-2a8e-4216-ad92-4809da469ce2
	I0914 22:10:19.627640   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:19.628220   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:20.120620   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:20.120646   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:20.120655   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:20.120661   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:20.124006   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:20.124030   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:20.124040   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:20.124049   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:20 GMT
	I0914 22:10:20.124059   29206 round_trippers.go:580]     Audit-Id: 90b3dd28-d0dd-47c4-b2fd-7e5eed92d9ff
	I0914 22:10:20.124065   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:20.124071   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:20.124076   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:20.124814   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:20.125372   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:20.125404   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:20.125421   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:20.125441   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:20.127574   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:20.127589   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:20.127595   29206 round_trippers.go:580]     Audit-Id: 525a361e-f162-4e1b-a711-0366588db4bd
	I0914 22:10:20.127600   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:20.127606   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:20.127620   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:20.127628   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:20.127639   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:20 GMT
	I0914 22:10:20.128291   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:20.128574   29206 pod_ready.go:102] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:10:20.621178   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:20.621203   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:20.621212   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:20.621233   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:20.627717   29206 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 22:10:20.627742   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:20.627752   29206 round_trippers.go:580]     Audit-Id: 50c87759-3e3e-4846-bc67-c4a3655acd31
	I0914 22:10:20.627760   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:20.627769   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:20.627777   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:20.627785   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:20.627793   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:20 GMT
	I0914 22:10:20.627973   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:20.628591   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:20.628612   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:20.628623   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:20.628632   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:20.632921   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:10:20.632947   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:20.632957   29206 round_trippers.go:580]     Audit-Id: da319cdc-6f24-429d-b81d-30facfb8a4ca
	I0914 22:10:20.632966   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:20.632974   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:20.632982   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:20.632990   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:20.633003   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:20 GMT
	I0914 22:10:20.633184   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:21.120806   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:21.120830   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:21.120838   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:21.120844   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:21.125576   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:10:21.125599   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:21.125605   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:21.125611   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:21 GMT
	I0914 22:10:21.125616   29206 round_trippers.go:580]     Audit-Id: 6a3e7c1b-150e-4e3b-b754-287aa0d0839e
	I0914 22:10:21.125621   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:21.125626   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:21.125631   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:21.126652   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:21.127052   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:21.127065   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:21.127071   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:21.127077   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:21.136204   29206 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0914 22:10:21.136224   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:21.136230   29206 round_trippers.go:580]     Audit-Id: 76352348-e9a5-4340-86e5-a8a59cf1e92e
	I0914 22:10:21.136236   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:21.136242   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:21.136251   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:21.136256   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:21.136261   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:21 GMT
	I0914 22:10:21.138131   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:21.620608   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:21.620632   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:21.620640   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:21.620645   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:21.623078   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:21.623095   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:21.623104   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:21.623111   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:21.623119   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:21 GMT
	I0914 22:10:21.623128   29206 round_trippers.go:580]     Audit-Id: 153f366e-f6ab-48c6-96b3-c3fccba21b1d
	I0914 22:10:21.623137   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:21.623142   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:21.623347   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:21.623804   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:21.623817   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:21.623824   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:21.623830   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:21.626553   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:21.626571   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:21.626581   29206 round_trippers.go:580]     Audit-Id: 0b932f07-3be9-4d25-bcb7-73831575b15b
	I0914 22:10:21.626587   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:21.626592   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:21.626598   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:21.626606   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:21.626614   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:21 GMT
	I0914 22:10:21.626729   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:22.121419   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:22.121443   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:22.121451   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:22.121456   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:22.125015   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:22.125029   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:22.125038   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:22.125043   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:22 GMT
	I0914 22:10:22.125048   29206 round_trippers.go:580]     Audit-Id: 4d163b3c-5b16-482f-afb6-cbdcfe3ba2d3
	I0914 22:10:22.125053   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:22.125060   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:22.125068   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:22.125223   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:22.125688   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:22.125702   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:22.125709   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:22.125714   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:22.128616   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:22.128638   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:22.128647   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:22 GMT
	I0914 22:10:22.128655   29206 round_trippers.go:580]     Audit-Id: 7d6d42d9-a89e-4680-b387-7376c629d8f1
	I0914 22:10:22.128663   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:22.128671   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:22.128679   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:22.128686   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:22.129235   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:22.129618   29206 pod_ready.go:102] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:10:22.620835   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:22.620859   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:22.620866   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:22.620872   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:22.623812   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:22.623838   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:22.623848   29206 round_trippers.go:580]     Audit-Id: f897d4b6-be4a-4170-8857-3dab3b9d1efb
	I0914 22:10:22.623856   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:22.623879   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:22.623888   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:22.623899   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:22.623910   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:22 GMT
	I0914 22:10:22.624108   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:22.624673   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:22.624690   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:22.624701   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:22.624712   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:22.626976   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:22.626996   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:22.627005   29206 round_trippers.go:580]     Audit-Id: 3a798eef-edda-4118-9889-15d0e0804be5
	I0914 22:10:22.627013   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:22.627021   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:22.627029   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:22.627041   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:22.627049   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:22 GMT
	I0914 22:10:22.627284   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:23.120971   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:23.121001   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:23.121012   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:23.121021   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:23.123749   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:23.123773   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:23.123783   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:23.123790   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:23.123795   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:23.123800   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:23 GMT
	I0914 22:10:23.123805   29206 round_trippers.go:580]     Audit-Id: 1afdaae1-3460-4ee1-88be-68953067fb17
	I0914 22:10:23.123810   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:23.123987   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:23.124491   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:23.124505   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:23.124512   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:23.124518   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:23.126846   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:23.126860   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:23.126866   29206 round_trippers.go:580]     Audit-Id: 898b9356-08da-4d53-8bf8-7da64ee855f7
	I0914 22:10:23.126871   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:23.126876   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:23.126882   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:23.126890   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:23.126899   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:23 GMT
	I0914 22:10:23.127277   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:23.620963   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:23.620990   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:23.621001   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:23.621010   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:23.624499   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:23.624524   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:23.624535   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:23.624543   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:23.624551   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:23 GMT
	I0914 22:10:23.624559   29206 round_trippers.go:580]     Audit-Id: c9882bc8-b61c-4540-a2ce-c4bcc29c470e
	I0914 22:10:23.624567   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:23.624579   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:23.624739   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:23.625266   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:23.625282   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:23.625292   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:23.625301   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:23.628681   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:23.628701   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:23.628711   29206 round_trippers.go:580]     Audit-Id: cdae3af0-8a0a-46b1-8e75-97ebfbf4a362
	I0914 22:10:23.628720   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:23.628727   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:23.628735   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:23.628747   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:23.628756   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:23 GMT
	I0914 22:10:23.628993   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:24.120627   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:24.120665   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:24.120673   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:24.120679   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:24.124574   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:10:24.124595   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:24.124604   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:24.124609   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:24 GMT
	I0914 22:10:24.124614   29206 round_trippers.go:580]     Audit-Id: 10ea6ab4-4fa8-464d-afe4-57bd38f06df2
	I0914 22:10:24.124619   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:24.124624   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:24.124632   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:24.124942   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:24.125353   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:24.125365   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:24.125372   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:24.125379   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:24.127261   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:24.127295   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:24.127305   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:24.127314   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:24.127329   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:24.127337   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:24.127346   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:24 GMT
	I0914 22:10:24.127359   29206 round_trippers.go:580]     Audit-Id: 11f501a7-9293-466b-bb92-04d67d290eba
	I0914 22:10:24.127460   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:24.621413   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:24.621433   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:24.621441   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:24.621447   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:24.623686   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:24.623704   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:24.623712   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:24.623723   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:24 GMT
	I0914 22:10:24.623731   29206 round_trippers.go:580]     Audit-Id: 091ec05c-9b36-4cd5-a21d-7f9d014a2ead
	I0914 22:10:24.623740   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:24.623757   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:24.623770   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:24.623896   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"751","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0914 22:10:24.624336   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:24.624350   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:24.624360   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:24.624368   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:24.626442   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:24.626463   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:24.626472   29206 round_trippers.go:580]     Audit-Id: af5cde75-026d-45bc-95cc-939e0dc115f1
	I0914 22:10:24.626481   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:24.626488   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:24.626495   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:24.626504   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:24.626515   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:24 GMT
	I0914 22:10:24.626646   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:24.627055   29206 pod_ready.go:102] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:10:25.120997   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:10:25.121022   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.121030   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.121037   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.123645   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.123670   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.123677   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.123683   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.123689   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.123696   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.123706   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.123741   29206 round_trippers.go:580]     Audit-Id: bddad793-95a4-47e2-a565-360709db2aff
	I0914 22:10:25.123959   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"890","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0914 22:10:25.124490   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:25.124507   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.124514   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.124527   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.131528   29206 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 22:10:25.131545   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.131555   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.131564   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.131572   29206 round_trippers.go:580]     Audit-Id: c66ed1bb-8652-4d31-abfe-a166244c696b
	I0914 22:10:25.131581   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.131592   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.131602   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.131798   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:25.132150   29206 pod_ready.go:92] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:25.132168   29206 pod_ready.go:81] duration metric: took 7.022411677s waiting for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.132178   29206 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.132234   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-124911
	I0914 22:10:25.132241   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.132248   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.132256   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.134169   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:25.134188   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.134198   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.134204   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.134209   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.134215   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.134220   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.134224   29206 round_trippers.go:580]     Audit-Id: c985340d-5bf3-435a-aa48-f29a9ad8f79b
	I0914 22:10:25.134412   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124911","namespace":"kube-system","uid":"1b195f1a-48a6-4b46-a819-2aeb9fe4e00c","resourceVersion":"882","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.116:2379","kubernetes.io/config.hash":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.mirror":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.seen":"2023-09-14T21:59:20.641783376Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0914 22:10:25.134828   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:25.134843   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.134850   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.134857   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.136924   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.136944   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.136953   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.136962   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.136971   29206 round_trippers.go:580]     Audit-Id: 1f2b4ab5-6ebe-4eee-ab04-fe9fe45ebaf8
	I0914 22:10:25.136979   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.136991   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.137001   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.137126   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:25.137490   29206 pod_ready.go:92] pod "etcd-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:25.137504   29206 pod_ready.go:81] duration metric: took 5.320125ms waiting for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.137518   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.137561   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124911
	I0914 22:10:25.137569   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.137577   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.137583   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.139568   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:25.139587   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.139596   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.139605   29206 round_trippers.go:580]     Audit-Id: 69b5ae2c-0fa0-475c-846a-3c83f8d18347
	I0914 22:10:25.139616   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.139624   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.139634   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.139644   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.139955   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124911","namespace":"kube-system","uid":"e9a93d33-82f3-4cfe-9b2c-92560dd09d09","resourceVersion":"849","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.116:8443","kubernetes.io/config.hash":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.mirror":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.seen":"2023-09-14T21:59:20.641778793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0914 22:10:25.140333   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:25.140347   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.140353   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.140359   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.143174   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.143194   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.143204   29206 round_trippers.go:580]     Audit-Id: a0698de0-d4ba-4114-968d-a7522afad935
	I0914 22:10:25.143210   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.143219   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.143228   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.143237   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.143246   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.143376   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:25.143737   29206 pod_ready.go:92] pod "kube-apiserver-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:25.143754   29206 pod_ready.go:81] duration metric: took 6.226471ms waiting for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.143765   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.143838   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124911
	I0914 22:10:25.143850   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.143862   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.143874   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.145427   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:25.145442   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.145452   29206 round_trippers.go:580]     Audit-Id: 1172b9e8-54fa-45b8-bcd4-fa67e242c090
	I0914 22:10:25.145459   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.145467   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.145477   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.145486   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.145497   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.145777   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124911","namespace":"kube-system","uid":"3efae123-9cdd-457a-a317-77370a6c5288","resourceVersion":"854","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.mirror":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.seen":"2023-09-14T21:59:20.641781682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0914 22:10:25.146086   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:25.146097   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.146104   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.146110   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.147879   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:25.147893   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.147903   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.147912   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.147920   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.147930   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.147940   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.147957   29206 round_trippers.go:580]     Audit-Id: 98c5454f-334f-48a1-9e18-d23baa8b0463
	I0914 22:10:25.148094   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:25.148453   29206 pod_ready.go:92] pod "kube-controller-manager-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:25.148468   29206 pod_ready.go:81] duration metric: took 4.692797ms waiting for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.148477   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.148523   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:10:25.148531   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.148537   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.148543   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.150775   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.150788   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.150796   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.150805   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.150814   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.150837   29206 round_trippers.go:580]     Audit-Id: a9601e17-0e3c-4138-97f4-5673d6a92b17
	I0914 22:10:25.150850   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.150862   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.151361   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2kd4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"de9e2ee3-364a-447b-9d7f-be85d86838ae","resourceVersion":"820","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0914 22:10:25.151699   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:25.151712   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.151721   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.151730   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.154295   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.154312   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.154319   29206 round_trippers.go:580]     Audit-Id: c28ca18c-10fc-4ed3-8acc-c4bdec5eb640
	I0914 22:10:25.154327   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.154335   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.154343   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.154350   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.154362   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.155077   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:25.155356   29206 pod_ready.go:92] pod "kube-proxy-2kd4p" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:25.155367   29206 pod_ready.go:81] duration metric: took 6.883539ms waiting for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.155375   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5tcff" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.321758   29206 request.go:629] Waited for 166.327218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:10:25.321841   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:10:25.321849   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.321861   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.321872   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.324511   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.324530   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.324536   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.324542   29206 round_trippers.go:580]     Audit-Id: 70704f1c-86e6-45bd-9bf9-7ded869c77d5
	I0914 22:10:25.324547   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.324552   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.324557   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.324564   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.324750   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5tcff","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc8d22f-954e-4a49-878e-9d1711d49c40","resourceVersion":"705","creationTimestamp":"2023-09-14T22:01:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0914 22:10:25.521584   29206 request.go:629] Waited for 196.411404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:10:25.521650   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:10:25.521656   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.521664   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.521670   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.524177   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.524203   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.524212   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.524220   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.524227   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.524235   29206 round_trippers.go:580]     Audit-Id: 7cff5aea-ed6c-4e33-8b74-407523409486
	I0914 22:10:25.524249   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.524258   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.524390   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m03","uid":"5e8b04da-e8ae-403d-9e94-bb008093a0b9","resourceVersion":"839","creationTimestamp":"2023-09-14T22:02:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0914 22:10:25.524671   29206 pod_ready.go:92] pod "kube-proxy-5tcff" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:25.524686   29206 pod_ready.go:81] duration metric: took 369.305507ms waiting for pod "kube-proxy-5tcff" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.524695   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.721779   29206 request.go:629] Waited for 197.013649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:10:25.721858   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:10:25.721865   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.721877   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.721902   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.724539   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.724560   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.724567   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.724572   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.724577   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.724582   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.724588   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.724592   29206 round_trippers.go:580]     Audit-Id: 5aa3db0a-5f06-43d5-9eed-59b8703a3842
	I0914 22:10:25.724794   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4qjg","generateName":"kube-proxy-","namespace":"kube-system","uid":"8214b42e-6656-4e01-bc47-82d6ab6592e5","resourceVersion":"501","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0914 22:10:25.921638   29206 request.go:629] Waited for 196.42137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:10:25.921694   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:10:25.921699   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:25.921706   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:25.921712   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:25.924250   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:25.924268   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:25.924273   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:25 GMT
	I0914 22:10:25.924279   29206 round_trippers.go:580]     Audit-Id: 01551c00-d390-4529-9ebc-2e9d190d8c09
	I0914 22:10:25.924284   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:25.924290   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:25.924295   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:25.924300   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:25.924451   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"cd983e44-fc71-4637-af68-c9e7572bc178","resourceVersion":"852","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I0914 22:10:25.924699   29206 pod_ready.go:92] pod "kube-proxy-c4qjg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:25.924712   29206 pod_ready.go:81] duration metric: took 400.00726ms waiting for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:25.924721   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:26.121099   29206 request.go:629] Waited for 196.293488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:10:26.121157   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:10:26.121162   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:26.121169   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:26.121177   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:26.123828   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:26.123845   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:26.123851   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:26.123856   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:26 GMT
	I0914 22:10:26.123864   29206 round_trippers.go:580]     Audit-Id: abb0a5d3-c02e-400e-8dcd-aea9b0acb625
	I0914 22:10:26.123873   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:26.123881   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:26.123890   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:26.124155   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124911","namespace":"kube-system","uid":"f8d502b7-9ee7-474e-ab64-9f721ee6970e","resourceVersion":"864","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.mirror":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.seen":"2023-09-14T21:59:20.641782607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0914 22:10:26.321924   29206 request.go:629] Waited for 197.435387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:26.321975   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:10:26.321981   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:26.321988   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:26.321994   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:26.324761   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:26.324783   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:26.324792   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:26.324800   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:26.324808   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:26 GMT
	I0914 22:10:26.324816   29206 round_trippers.go:580]     Audit-Id: 42bba80d-db2b-46ee-b677-47c5a0892954
	I0914 22:10:26.324824   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:26.324833   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:26.325174   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0914 22:10:26.325490   29206 pod_ready.go:92] pod "kube-scheduler-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:10:26.325503   29206 pod_ready.go:81] duration metric: took 400.776488ms waiting for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:10:26.325512   29206 pod_ready.go:38] duration metric: took 8.226840551s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:10:26.325525   29206 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:10:26.325567   29206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:10:26.338176   29206 command_runner.go:130] > 1062
	I0914 22:10:26.338202   29206 api_server.go:72] duration metric: took 14.123194296s to wait for apiserver process to appear ...
	I0914 22:10:26.338209   29206 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:10:26.338221   29206 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:10:26.343970   29206 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0914 22:10:26.344028   29206 round_trippers.go:463] GET https://192.168.39.116:8443/version
	I0914 22:10:26.344036   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:26.344044   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:26.344053   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:26.345180   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:10:26.345199   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:26.345209   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:26.345217   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:26.345226   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:26.345239   29206 round_trippers.go:580]     Content-Length: 263
	I0914 22:10:26.345247   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:26 GMT
	I0914 22:10:26.345255   29206 round_trippers.go:580]     Audit-Id: b55e2a43-81aa-4018-8a25-22689d4d0497
	I0914 22:10:26.345267   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:26.345311   29206 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 22:10:26.345363   29206 api_server.go:141] control plane version: v1.28.1
	I0914 22:10:26.345375   29206 api_server.go:131] duration metric: took 7.160394ms to wait for apiserver health ...
	I0914 22:10:26.345387   29206 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:10:26.521635   29206 request.go:629] Waited for 176.158265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:10:26.521683   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:10:26.521688   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:26.521704   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:26.521719   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:26.525965   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:10:26.525988   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:26.526017   29206 round_trippers.go:580]     Audit-Id: 36a26f9e-21e8-4d33-a1c2-becbe49f399a
	I0914 22:10:26.526034   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:26.526042   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:26.526050   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:26.526058   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:26.526067   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:26 GMT
	I0914 22:10:26.527760   29206 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"890","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81886 chars]
	I0914 22:10:26.530126   29206 system_pods.go:59] 12 kube-system pods found
	I0914 22:10:26.530145   29206 system_pods.go:61] "coredns-5dd5756b68-ssj9q" [aadacae8-9f4d-4c24-91c7-78a88d187f73] Running
	I0914 22:10:26.530152   29206 system_pods.go:61] "etcd-multinode-124911" [1b195f1a-48a6-4b46-a819-2aeb9fe4e00c] Running
	I0914 22:10:26.530159   29206 system_pods.go:61] "kindnet-274xj" [6d12f7c0-2ad9-436f-ab5d-528c4823865c] Running
	I0914 22:10:26.530165   29206 system_pods.go:61] "kindnet-mmwd5" [4f33c106-87c4-42d3-b6ae-eb325637540e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 22:10:26.530174   29206 system_pods.go:61] "kindnet-vjv8m" [d5b0f0e4-3bb0-4e77-8a6f-7b350a511f5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 22:10:26.530181   29206 system_pods.go:61] "kube-apiserver-multinode-124911" [e9a93d33-82f3-4cfe-9b2c-92560dd09d09] Running
	I0914 22:10:26.530187   29206 system_pods.go:61] "kube-controller-manager-multinode-124911" [3efae123-9cdd-457a-a317-77370a6c5288] Running
	I0914 22:10:26.530191   29206 system_pods.go:61] "kube-proxy-2kd4p" [de9e2ee3-364a-447b-9d7f-be85d86838ae] Running
	I0914 22:10:26.530196   29206 system_pods.go:61] "kube-proxy-5tcff" [bfc8d22f-954e-4a49-878e-9d1711d49c40] Running
	I0914 22:10:26.530200   29206 system_pods.go:61] "kube-proxy-c4qjg" [8214b42e-6656-4e01-bc47-82d6ab6592e5] Running
	I0914 22:10:26.530206   29206 system_pods.go:61] "kube-scheduler-multinode-124911" [f8d502b7-9ee7-474e-ab64-9f721ee6970e] Running
	I0914 22:10:26.530210   29206 system_pods.go:61] "storage-provisioner" [aada9d30-e15d-4405-a7e2-e979dd4b8e0d] Running
	I0914 22:10:26.530219   29206 system_pods.go:74] duration metric: took 184.825521ms to wait for pod list to return data ...
	I0914 22:10:26.530227   29206 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:10:26.721677   29206 request.go:629] Waited for 191.372541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0914 22:10:26.721758   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/default/serviceaccounts
	I0914 22:10:26.721767   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:26.721774   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:26.721781   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:26.724581   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:26.724605   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:26.724613   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:26.724619   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:26.724623   29206 round_trippers.go:580]     Content-Length: 261
	I0914 22:10:26.724629   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:26 GMT
	I0914 22:10:26.724634   29206 round_trippers.go:580]     Audit-Id: d387de03-1e52-4902-b030-8f39dd8cdded
	I0914 22:10:26.724639   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:26.724644   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:26.724662   29206 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2fad9e6d-ab87-4f3f-9379-cd375b431267","resourceVersion":"303","creationTimestamp":"2023-09-14T21:59:32Z"}}]}
	I0914 22:10:26.724840   29206 default_sa.go:45] found service account: "default"
	I0914 22:10:26.724856   29206 default_sa.go:55] duration metric: took 194.621855ms for default service account to be created ...
	I0914 22:10:26.724864   29206 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:10:26.921250   29206 request.go:629] Waited for 196.313967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:10:26.921325   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:10:26.921332   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:26.921340   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:26.921353   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:26.926629   29206 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 22:10:26.926657   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:26.926667   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:26 GMT
	I0914 22:10:26.926675   29206 round_trippers.go:580]     Audit-Id: d75f4078-0847-47b0-81e2-4f4e13539655
	I0914 22:10:26.926682   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:26.926690   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:26.926699   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:26.926708   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:26.928063   29206 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"890","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81886 chars]
	I0914 22:10:26.930958   29206 system_pods.go:86] 12 kube-system pods found
	I0914 22:10:26.930986   29206 system_pods.go:89] "coredns-5dd5756b68-ssj9q" [aadacae8-9f4d-4c24-91c7-78a88d187f73] Running
	I0914 22:10:26.930994   29206 system_pods.go:89] "etcd-multinode-124911" [1b195f1a-48a6-4b46-a819-2aeb9fe4e00c] Running
	I0914 22:10:26.931006   29206 system_pods.go:89] "kindnet-274xj" [6d12f7c0-2ad9-436f-ab5d-528c4823865c] Running
	I0914 22:10:26.931015   29206 system_pods.go:89] "kindnet-mmwd5" [4f33c106-87c4-42d3-b6ae-eb325637540e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 22:10:26.931025   29206 system_pods.go:89] "kindnet-vjv8m" [d5b0f0e4-3bb0-4e77-8a6f-7b350a511f5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 22:10:26.931034   29206 system_pods.go:89] "kube-apiserver-multinode-124911" [e9a93d33-82f3-4cfe-9b2c-92560dd09d09] Running
	I0914 22:10:26.931041   29206 system_pods.go:89] "kube-controller-manager-multinode-124911" [3efae123-9cdd-457a-a317-77370a6c5288] Running
	I0914 22:10:26.931048   29206 system_pods.go:89] "kube-proxy-2kd4p" [de9e2ee3-364a-447b-9d7f-be85d86838ae] Running
	I0914 22:10:26.931054   29206 system_pods.go:89] "kube-proxy-5tcff" [bfc8d22f-954e-4a49-878e-9d1711d49c40] Running
	I0914 22:10:26.931061   29206 system_pods.go:89] "kube-proxy-c4qjg" [8214b42e-6656-4e01-bc47-82d6ab6592e5] Running
	I0914 22:10:26.931067   29206 system_pods.go:89] "kube-scheduler-multinode-124911" [f8d502b7-9ee7-474e-ab64-9f721ee6970e] Running
	I0914 22:10:26.931074   29206 system_pods.go:89] "storage-provisioner" [aada9d30-e15d-4405-a7e2-e979dd4b8e0d] Running
	I0914 22:10:26.931082   29206 system_pods.go:126] duration metric: took 206.212723ms to wait for k8s-apps to be running ...
	I0914 22:10:26.931092   29206 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:10:26.931142   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:10:26.951365   29206 system_svc.go:56] duration metric: took 20.26866ms WaitForService to wait for kubelet.
	I0914 22:10:26.951382   29206 kubeadm.go:581] duration metric: took 14.736376162s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:10:26.951398   29206 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:10:27.121842   29206 request.go:629] Waited for 170.362208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0914 22:10:27.121907   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0914 22:10:27.121915   29206 round_trippers.go:469] Request Headers:
	I0914 22:10:27.121926   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:10:27.121940   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:10:27.124811   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:10:27.124834   29206 round_trippers.go:577] Response Headers:
	I0914 22:10:27.124844   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:10:27.124852   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:10:27.124860   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:10:27.124872   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:10:27 GMT
	I0914 22:10:27.124880   29206 round_trippers.go:580]     Audit-Id: 461aaf17-36ca-4ec3-be6c-fcb0a898b9c7
	I0914 22:10:27.124892   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:10:27.125095   29206 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"895"},"items":[{"metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"867","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15076 chars]
	I0914 22:10:27.125663   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:10:27.125682   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:10:27.125691   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:10:27.125695   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:10:27.125704   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:10:27.125718   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:10:27.125724   29206 node_conditions.go:105] duration metric: took 174.321405ms to run NodePressure ...
	I0914 22:10:27.125744   29206 start.go:228] waiting for startup goroutines ...
	I0914 22:10:27.125753   29206 start.go:233] waiting for cluster config update ...
	I0914 22:10:27.125759   29206 start.go:242] writing updated cluster config ...
	I0914 22:10:27.126167   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:10:27.126261   29206 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 22:10:27.128403   29206 out.go:177] * Starting worker node multinode-124911-m02 in cluster multinode-124911
	I0914 22:10:27.130231   29206 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:10:27.130259   29206 cache.go:57] Caching tarball of preloaded images
	I0914 22:10:27.130364   29206 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:10:27.130378   29206 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 22:10:27.130500   29206 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 22:10:27.130741   29206 start.go:365] acquiring machines lock for multinode-124911-m02: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:10:27.130796   29206 start.go:369] acquired machines lock for "multinode-124911-m02" in 31.803µs
	I0914 22:10:27.130817   29206 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:10:27.130828   29206 fix.go:54] fixHost starting: m02
	I0914 22:10:27.131148   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:10:27.131175   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:10:27.145924   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0914 22:10:27.146348   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:10:27.146845   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:10:27.146873   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:10:27.147170   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:10:27.147396   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:10:27.147565   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetState
	I0914 22:10:27.149288   29206 fix.go:102] recreateIfNeeded on multinode-124911-m02: state=Running err=<nil>
	W0914 22:10:27.149307   29206 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:10:27.150880   29206 out.go:177] * Updating the running kvm2 "multinode-124911-m02" VM ...
	I0914 22:10:27.152329   29206 machine.go:88] provisioning docker machine ...
	I0914 22:10:27.152355   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:10:27.152596   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetMachineName
	I0914 22:10:27.152772   29206 buildroot.go:166] provisioning hostname "multinode-124911-m02"
	I0914 22:10:27.152795   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetMachineName
	I0914 22:10:27.152986   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:10:27.155496   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.155913   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:10:27.155946   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.156070   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:10:27.156234   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:10:27.156387   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:10:27.156521   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:10:27.156676   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:10:27.156986   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:10:27.156997   29206 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-124911-m02 && echo "multinode-124911-m02" | sudo tee /etc/hostname
	I0914 22:10:27.288129   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-124911-m02
	
	I0914 22:10:27.288153   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:10:27.290979   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.291389   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:10:27.291413   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.291644   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:10:27.291835   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:10:27.291976   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:10:27.292115   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:10:27.292304   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:10:27.292627   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:10:27.292656   29206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124911-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124911-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124911-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:10:27.403983   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:10:27.404012   29206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:10:27.404029   29206 buildroot.go:174] setting up certificates
	I0914 22:10:27.404037   29206 provision.go:83] configureAuth start
	I0914 22:10:27.404045   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetMachineName
	I0914 22:10:27.404299   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetIP
	I0914 22:10:27.406874   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.407222   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:10:27.407263   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.407393   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:10:27.409747   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.410111   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:10:27.410141   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.410271   29206 provision.go:138] copyHostCerts
	I0914 22:10:27.410297   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:10:27.410325   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:10:27.410334   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:10:27.410399   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:10:27.410480   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:10:27.410497   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:10:27.410503   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:10:27.410528   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:10:27.410571   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:10:27.410588   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:10:27.410594   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:10:27.410614   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:10:27.410656   29206 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.multinode-124911-m02 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube multinode-124911-m02]
	I0914 22:10:27.501807   29206 provision.go:172] copyRemoteCerts
	I0914 22:10:27.501853   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:10:27.501872   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:10:27.504323   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.504664   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:10:27.504703   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.504847   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:10:27.505038   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:10:27.505198   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:10:27.505313   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:10:27.588490   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 22:10:27.588544   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:10:27.610460   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 22:10:27.610521   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0914 22:10:27.631858   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 22:10:27.631913   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:10:27.653025   29206 provision.go:86] duration metric: configureAuth took 248.978698ms
	I0914 22:10:27.653045   29206 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:10:27.653280   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:10:27.653363   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:10:27.655972   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.656387   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:10:27.656415   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:10:27.656629   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:10:27.656823   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:10:27.656982   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:10:27.657122   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:10:27.657299   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:10:27.657687   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:10:27.657706   29206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:11:58.211607   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:11:58.211636   29206 machine.go:91] provisioned docker machine in 1m31.059288848s
	I0914 22:11:58.211649   29206 start.go:300] post-start starting for "multinode-124911-m02" (driver="kvm2")
	I0914 22:11:58.211663   29206 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:11:58.211734   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:11:58.212060   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:11:58.212096   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:11:58.214789   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.215162   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:11:58.215195   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.215377   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:11:58.215582   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:11:58.215730   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:11:58.215870   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:11:58.301295   29206 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:11:58.304902   29206 command_runner.go:130] > NAME=Buildroot
	I0914 22:11:58.304925   29206 command_runner.go:130] > VERSION=2021.02.12-1-g52d8811-dirty
	I0914 22:11:58.304932   29206 command_runner.go:130] > ID=buildroot
	I0914 22:11:58.304940   29206 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 22:11:58.304948   29206 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 22:11:58.304982   29206 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:11:58.304996   29206 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:11:58.305068   29206 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:11:58.305156   29206 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:11:58.305169   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /etc/ssl/certs/134852.pem
	I0914 22:11:58.305271   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:11:58.313175   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:11:58.337239   29206 start.go:303] post-start completed in 125.574455ms
	I0914 22:11:58.337260   29206 fix.go:56] fixHost completed within 1m31.206432706s
	I0914 22:11:58.337279   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:11:58.339752   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.340122   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:11:58.340157   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.340274   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:11:58.340456   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:11:58.340624   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:11:58.340832   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:11:58.341002   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:11:58.341300   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I0914 22:11:58.341311   29206 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:11:58.451881   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694729518.440998440
	
	I0914 22:11:58.451902   29206 fix.go:206] guest clock: 1694729518.440998440
	I0914 22:11:58.451909   29206 fix.go:219] Guest: 2023-09-14 22:11:58.44099844 +0000 UTC Remote: 2023-09-14 22:11:58.337264236 +0000 UTC m=+452.072081517 (delta=103.734204ms)
	I0914 22:11:58.451923   29206 fix.go:190] guest clock delta is within tolerance: 103.734204ms
	I0914 22:11:58.451927   29206 start.go:83] releasing machines lock for "multinode-124911-m02", held for 1m31.321118072s
	I0914 22:11:58.451952   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:11:58.452194   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetIP
	I0914 22:11:58.454968   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.455376   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:11:58.455408   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.457645   29206 out.go:177] * Found network options:
	I0914 22:11:58.459288   29206 out.go:177]   - NO_PROXY=192.168.39.116
	W0914 22:11:58.460885   29206 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 22:11:58.460925   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:11:58.461484   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:11:58.461657   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:11:58.461748   29206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:11:58.461797   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	W0914 22:11:58.461853   29206 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 22:11:58.461930   29206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:11:58.461954   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:11:58.464598   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.464832   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.465001   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:11:58.465033   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.465169   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:11:58.465300   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:11:58.465334   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:11:58.465335   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:58.465407   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:11:58.465560   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:11:58.465569   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:11:58.465745   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:11:58.465756   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:11:58.465911   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:11:58.578084   29206 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 22:11:58.693401   29206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:11:58.699064   29206 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 22:11:58.699177   29206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:11:58.699241   29206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:11:58.707152   29206 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 22:11:58.707171   29206 start.go:469] detecting cgroup driver to use...
	I0914 22:11:58.707226   29206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:11:58.719849   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:11:58.731164   29206 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:11:58.731215   29206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:11:58.742932   29206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:11:58.754172   29206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:11:58.900361   29206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:11:59.036159   29206 docker.go:212] disabling docker service ...
	I0914 22:11:59.036219   29206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:11:59.051032   29206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:11:59.063567   29206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:11:59.200069   29206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:11:59.337067   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:11:59.349857   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:11:59.366225   29206 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 22:11:59.366539   29206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:11:59.366599   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:11:59.376481   29206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:11:59.376540   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:11:59.385626   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:11:59.394748   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:11:59.404125   29206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:11:59.413465   29206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:11:59.421922   29206 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 22:11:59.421979   29206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:11:59.430140   29206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:11:59.560532   29206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:11:59.773989   29206 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:11:59.774063   29206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:11:59.779393   29206 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 22:11:59.779412   29206 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 22:11:59.779422   29206 command_runner.go:130] > Device: 16h/22d	Inode: 1235        Links: 1
	I0914 22:11:59.779432   29206 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:11:59.779441   29206 command_runner.go:130] > Access: 2023-09-14 22:11:59.696982646 +0000
	I0914 22:11:59.779492   29206 command_runner.go:130] > Modify: 2023-09-14 22:11:59.696982646 +0000
	I0914 22:11:59.779509   29206 command_runner.go:130] > Change: 2023-09-14 22:11:59.696982646 +0000
	I0914 22:11:59.779515   29206 command_runner.go:130] >  Birth: -
	I0914 22:11:59.779879   29206 start.go:537] Will wait 60s for crictl version
	I0914 22:11:59.779937   29206 ssh_runner.go:195] Run: which crictl
	I0914 22:11:59.783262   29206 command_runner.go:130] > /usr/bin/crictl
	I0914 22:11:59.783320   29206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:11:59.816120   29206 command_runner.go:130] > Version:  0.1.0
	I0914 22:11:59.816143   29206 command_runner.go:130] > RuntimeName:  cri-o
	I0914 22:11:59.816147   29206 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0914 22:11:59.816153   29206 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0914 22:11:59.817356   29206 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:11:59.817431   29206 ssh_runner.go:195] Run: crio --version
	I0914 22:11:59.868746   29206 command_runner.go:130] > crio version 1.24.1
	I0914 22:11:59.868766   29206 command_runner.go:130] > Version:          1.24.1
	I0914 22:11:59.868773   29206 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 22:11:59.868778   29206 command_runner.go:130] > GitTreeState:     dirty
	I0914 22:11:59.868784   29206 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 22:11:59.868793   29206 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 22:11:59.868797   29206 command_runner.go:130] > Compiler:         gc
	I0914 22:11:59.868802   29206 command_runner.go:130] > Platform:         linux/amd64
	I0914 22:11:59.868809   29206 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:11:59.868820   29206 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:11:59.868827   29206 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:11:59.868837   29206 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:11:59.870187   29206 ssh_runner.go:195] Run: crio --version
	I0914 22:11:59.912936   29206 command_runner.go:130] > crio version 1.24.1
	I0914 22:11:59.912968   29206 command_runner.go:130] > Version:          1.24.1
	I0914 22:11:59.912979   29206 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 22:11:59.912987   29206 command_runner.go:130] > GitTreeState:     dirty
	I0914 22:11:59.913001   29206 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 22:11:59.913006   29206 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 22:11:59.913011   29206 command_runner.go:130] > Compiler:         gc
	I0914 22:11:59.913019   29206 command_runner.go:130] > Platform:         linux/amd64
	I0914 22:11:59.913028   29206 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:11:59.913047   29206 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:11:59.913054   29206 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:11:59.913060   29206 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:11:59.915138   29206 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:11:59.916666   29206 out.go:177]   - env NO_PROXY=192.168.39.116
	I0914 22:11:59.918002   29206 main.go:141] libmachine: (multinode-124911-m02) Calling .GetIP
	I0914 22:11:59.920864   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:59.921208   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:11:59.921240   29206 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:11:59.921406   29206 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:11:59.925375   29206 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0914 22:11:59.925415   29206 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911 for IP: 192.168.39.254
	I0914 22:11:59.925429   29206 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:11:59.925554   29206 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:11:59.925594   29206 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:11:59.925603   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 22:11:59.925618   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 22:11:59.925629   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 22:11:59.925644   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 22:11:59.925688   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:11:59.925972   29206 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:11:59.925991   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:11:59.926029   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:11:59.926075   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:11:59.926109   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:11:59.926177   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:11:59.926210   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /usr/share/ca-certificates/134852.pem
	I0914 22:11:59.926226   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:11:59.926241   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem -> /usr/share/ca-certificates/13485.pem
	I0914 22:11:59.926913   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:11:59.951404   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:11:59.973422   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:11:59.995549   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:12:00.016034   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:12:00.037329   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:12:00.058273   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:12:00.080549   29206 ssh_runner.go:195] Run: openssl version
	I0914 22:12:00.085534   29206 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0914 22:12:00.085884   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:12:00.096471   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:12:00.100938   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:12:00.100973   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:12:00.101013   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:12:00.106303   29206 command_runner.go:130] > 3ec20f2e
	I0914 22:12:00.106415   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:12:00.114931   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:12:00.124924   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:12:00.129026   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:12:00.129089   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:12:00.129131   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:12:00.134243   29206 command_runner.go:130] > b5213941
	I0914 22:12:00.134392   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:12:00.142810   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:12:00.152293   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:12:00.156225   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:12:00.156264   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:12:00.156303   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:12:00.161184   29206 command_runner.go:130] > 51391683
	I0914 22:12:00.161265   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:12:00.170736   29206 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:12:00.174205   29206 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:12:00.174425   29206 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:12:00.174544   29206 ssh_runner.go:195] Run: crio config
	I0914 22:12:00.227449   29206 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 22:12:00.227500   29206 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 22:12:00.227513   29206 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 22:12:00.227520   29206 command_runner.go:130] > #
	I0914 22:12:00.227533   29206 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 22:12:00.227545   29206 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 22:12:00.227559   29206 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 22:12:00.227572   29206 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 22:12:00.227583   29206 command_runner.go:130] > # reload'.
	I0914 22:12:00.227593   29206 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 22:12:00.227606   29206 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 22:12:00.227618   29206 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 22:12:00.227631   29206 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 22:12:00.227640   29206 command_runner.go:130] > [crio]
	I0914 22:12:00.227650   29206 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 22:12:00.227661   29206 command_runner.go:130] > # containers images, in this directory.
	I0914 22:12:00.227669   29206 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 22:12:00.227692   29206 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 22:12:00.227704   29206 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 22:12:00.227718   29206 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 22:12:00.227731   29206 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 22:12:00.227741   29206 command_runner.go:130] > storage_driver = "overlay"
	I0914 22:12:00.227751   29206 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 22:12:00.227763   29206 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 22:12:00.227774   29206 command_runner.go:130] > storage_option = [
	I0914 22:12:00.227781   29206 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 22:12:00.227790   29206 command_runner.go:130] > ]
	I0914 22:12:00.227801   29206 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 22:12:00.227814   29206 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 22:12:00.227825   29206 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 22:12:00.227837   29206 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 22:12:00.227850   29206 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 22:12:00.227860   29206 command_runner.go:130] > # always happen on a node reboot
	I0914 22:12:00.227868   29206 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 22:12:00.227881   29206 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 22:12:00.227894   29206 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 22:12:00.227924   29206 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 22:12:00.227938   29206 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0914 22:12:00.227950   29206 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 22:12:00.227966   29206 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 22:12:00.227978   29206 command_runner.go:130] > # internal_wipe = true
	I0914 22:12:00.227990   29206 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 22:12:00.228003   29206 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 22:12:00.228015   29206 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 22:12:00.228023   29206 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 22:12:00.228033   29206 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 22:12:00.228043   29206 command_runner.go:130] > [crio.api]
	I0914 22:12:00.228054   29206 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 22:12:00.228066   29206 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 22:12:00.228077   29206 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 22:12:00.228089   29206 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 22:12:00.228102   29206 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 22:12:00.228114   29206 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 22:12:00.228168   29206 command_runner.go:130] > # stream_port = "0"
	I0914 22:12:00.228184   29206 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 22:12:00.228191   29206 command_runner.go:130] > # stream_enable_tls = false
	I0914 22:12:00.228201   29206 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 22:12:00.228212   29206 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 22:12:00.228226   29206 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 22:12:00.228247   29206 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 22:12:00.228257   29206 command_runner.go:130] > # minutes.
	I0914 22:12:00.228264   29206 command_runner.go:130] > # stream_tls_cert = ""
	I0914 22:12:00.228277   29206 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 22:12:00.228290   29206 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 22:12:00.228301   29206 command_runner.go:130] > # stream_tls_key = ""
	I0914 22:12:00.228316   29206 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 22:12:00.228329   29206 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 22:12:00.228339   29206 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 22:12:00.228349   29206 command_runner.go:130] > # stream_tls_ca = ""
	I0914 22:12:00.228362   29206 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:12:00.228372   29206 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 22:12:00.228392   29206 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:12:00.228405   29206 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0914 22:12:00.228447   29206 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 22:12:00.228467   29206 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 22:12:00.228477   29206 command_runner.go:130] > [crio.runtime]
	I0914 22:12:00.228487   29206 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 22:12:00.228500   29206 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 22:12:00.228507   29206 command_runner.go:130] > # "nofile=1024:2048"
	I0914 22:12:00.228522   29206 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 22:12:00.228532   29206 command_runner.go:130] > # default_ulimits = [
	I0914 22:12:00.228542   29206 command_runner.go:130] > # ]
	I0914 22:12:00.228553   29206 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 22:12:00.228563   29206 command_runner.go:130] > # no_pivot = false
	I0914 22:12:00.228576   29206 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 22:12:00.228589   29206 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 22:12:00.228600   29206 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 22:12:00.228609   29206 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 22:12:00.228621   29206 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 22:12:00.228638   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:12:00.228650   29206 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 22:12:00.228660   29206 command_runner.go:130] > # Cgroup setting for conmon
	I0914 22:12:00.228672   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 22:12:00.228682   29206 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 22:12:00.228692   29206 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 22:12:00.228703   29206 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 22:12:00.228713   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:12:00.228723   29206 command_runner.go:130] > conmon_env = [
	I0914 22:12:00.228733   29206 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 22:12:00.228742   29206 command_runner.go:130] > ]
	I0914 22:12:00.228750   29206 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 22:12:00.228759   29206 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 22:12:00.228773   29206 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 22:12:00.228783   29206 command_runner.go:130] > # default_env = [
	I0914 22:12:00.228790   29206 command_runner.go:130] > # ]
	I0914 22:12:00.228802   29206 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 22:12:00.228813   29206 command_runner.go:130] > # selinux = false
	I0914 22:12:00.228835   29206 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 22:12:00.228850   29206 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 22:12:00.228864   29206 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 22:12:00.228874   29206 command_runner.go:130] > # seccomp_profile = ""
	I0914 22:12:00.228887   29206 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 22:12:00.228899   29206 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 22:12:00.228912   29206 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 22:12:00.228922   29206 command_runner.go:130] > # which might increase security.
	I0914 22:12:00.228968   29206 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 22:12:00.228985   29206 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 22:12:00.229000   29206 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 22:12:00.229013   29206 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 22:12:00.229028   29206 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 22:12:00.229041   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:12:00.229052   29206 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 22:12:00.229065   29206 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 22:12:00.229077   29206 command_runner.go:130] > # the cgroup blockio controller.
	I0914 22:12:00.229085   29206 command_runner.go:130] > # blockio_config_file = ""
	I0914 22:12:00.229104   29206 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 22:12:00.229115   29206 command_runner.go:130] > # irqbalance daemon.
	I0914 22:12:00.229126   29206 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 22:12:00.229139   29206 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 22:12:00.229151   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:12:00.229161   29206 command_runner.go:130] > # rdt_config_file = ""
	I0914 22:12:00.229169   29206 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 22:12:00.229180   29206 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 22:12:00.229189   29206 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 22:12:00.229199   29206 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 22:12:00.229210   29206 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 22:12:00.229223   29206 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 22:12:00.229233   29206 command_runner.go:130] > # will be added.
	I0914 22:12:00.229240   29206 command_runner.go:130] > # default_capabilities = [
	I0914 22:12:00.229250   29206 command_runner.go:130] > # 	"CHOWN",
	I0914 22:12:00.229256   29206 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 22:12:00.229266   29206 command_runner.go:130] > # 	"FSETID",
	I0914 22:12:00.229272   29206 command_runner.go:130] > # 	"FOWNER",
	I0914 22:12:00.229289   29206 command_runner.go:130] > # 	"SETGID",
	I0914 22:12:00.229306   29206 command_runner.go:130] > # 	"SETUID",
	I0914 22:12:00.229313   29206 command_runner.go:130] > # 	"SETPCAP",
	I0914 22:12:00.229320   29206 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 22:12:00.229331   29206 command_runner.go:130] > # 	"KILL",
	I0914 22:12:00.229341   29206 command_runner.go:130] > # ]
	I0914 22:12:00.229352   29206 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 22:12:00.229365   29206 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:12:00.229376   29206 command_runner.go:130] > # default_sysctls = [
	I0914 22:12:00.229384   29206 command_runner.go:130] > # ]
	I0914 22:12:00.229397   29206 command_runner.go:130] > # List of devices on the host that a
	I0914 22:12:00.229411   29206 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 22:12:00.229420   29206 command_runner.go:130] > # allowed_devices = [
	I0914 22:12:00.229427   29206 command_runner.go:130] > # 	"/dev/fuse",
	I0914 22:12:00.229436   29206 command_runner.go:130] > # ]
	I0914 22:12:00.229445   29206 command_runner.go:130] > # List of additional devices. specified as
	I0914 22:12:00.229466   29206 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 22:12:00.229478   29206 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 22:12:00.229529   29206 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:12:00.229541   29206 command_runner.go:130] > # additional_devices = [
	I0914 22:12:00.229547   29206 command_runner.go:130] > # ]
	I0914 22:12:00.229555   29206 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 22:12:00.229565   29206 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 22:12:00.229606   29206 command_runner.go:130] > # 	"/etc/cdi",
	I0914 22:12:00.229616   29206 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 22:12:00.229622   29206 command_runner.go:130] > # ]
	I0914 22:12:00.229635   29206 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 22:12:00.229645   29206 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 22:12:00.229655   29206 command_runner.go:130] > # Defaults to false.
	I0914 22:12:00.229663   29206 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 22:12:00.229676   29206 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 22:12:00.229690   29206 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 22:12:00.229699   29206 command_runner.go:130] > # hooks_dir = [
	I0914 22:12:00.229710   29206 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 22:12:00.229719   29206 command_runner.go:130] > # ]
	I0914 22:12:00.229738   29206 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 22:12:00.229761   29206 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 22:12:00.229774   29206 command_runner.go:130] > # its default mounts from the following two files:
	I0914 22:12:00.229787   29206 command_runner.go:130] > #
	I0914 22:12:00.229801   29206 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 22:12:00.229815   29206 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 22:12:00.229828   29206 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 22:12:00.229837   29206 command_runner.go:130] > #
	I0914 22:12:00.229846   29206 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 22:12:00.229860   29206 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 22:12:00.229870   29206 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 22:12:00.229889   29206 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 22:12:00.229895   29206 command_runner.go:130] > #
	I0914 22:12:00.229901   29206 command_runner.go:130] > # default_mounts_file = ""
	I0914 22:12:00.229910   29206 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 22:12:00.229921   29206 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 22:12:00.229933   29206 command_runner.go:130] > pids_limit = 1024
	I0914 22:12:00.229942   29206 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0914 22:12:00.229956   29206 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 22:12:00.229971   29206 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 22:12:00.229994   29206 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 22:12:00.230005   29206 command_runner.go:130] > # log_size_max = -1
	I0914 22:12:00.230019   29206 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0914 22:12:00.230029   29206 command_runner.go:130] > # log_to_journald = false
	I0914 22:12:00.230040   29206 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 22:12:00.230053   29206 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 22:12:00.230064   29206 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 22:12:00.230073   29206 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 22:12:00.230081   29206 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 22:12:00.230092   29206 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 22:12:00.230100   29206 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 22:12:00.230110   29206 command_runner.go:130] > # read_only = false
	I0914 22:12:00.230120   29206 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 22:12:00.230133   29206 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 22:12:00.230141   29206 command_runner.go:130] > # live configuration reload.
	I0914 22:12:00.230151   29206 command_runner.go:130] > # log_level = "info"
	I0914 22:12:00.230160   29206 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 22:12:00.230174   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:12:00.230184   29206 command_runner.go:130] > # log_filter = ""
	I0914 22:12:00.230195   29206 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 22:12:00.230208   29206 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 22:12:00.230219   29206 command_runner.go:130] > # separated by comma.
	I0914 22:12:00.230226   29206 command_runner.go:130] > # uid_mappings = ""
	I0914 22:12:00.230237   29206 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 22:12:00.230252   29206 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 22:12:00.230263   29206 command_runner.go:130] > # separated by comma.
	I0914 22:12:00.230274   29206 command_runner.go:130] > # gid_mappings = ""
	I0914 22:12:00.230289   29206 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 22:12:00.230304   29206 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:12:00.230321   29206 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:12:00.230329   29206 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 22:12:00.230338   29206 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 22:12:00.230348   29206 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:12:00.230364   29206 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:12:00.230375   29206 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 22:12:00.230390   29206 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 22:12:00.230402   29206 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 22:12:00.230415   29206 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 22:12:00.230426   29206 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 22:12:00.230439   29206 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 22:12:00.230448   29206 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 22:12:00.230479   29206 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 22:12:00.230488   29206 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 22:12:00.230498   29206 command_runner.go:130] > drop_infra_ctr = false
	I0914 22:12:00.230508   29206 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 22:12:00.230556   29206 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 22:12:00.230572   29206 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 22:12:00.230580   29206 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 22:12:00.230593   29206 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 22:12:00.230604   29206 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 22:12:00.230612   29206 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 22:12:00.230625   29206 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 22:12:00.230636   29206 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 22:12:00.230656   29206 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 22:12:00.230669   29206 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0914 22:12:00.230681   29206 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0914 22:12:00.230689   29206 command_runner.go:130] > # default_runtime = "runc"
	I0914 22:12:00.230701   29206 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 22:12:00.230715   29206 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0914 22:12:00.230738   29206 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 22:12:00.230750   29206 command_runner.go:130] > # creation as a file is not desired either.
	I0914 22:12:00.230763   29206 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 22:12:00.230774   29206 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 22:12:00.230782   29206 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 22:12:00.230791   29206 command_runner.go:130] > # ]
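Following the /etc/hostname example in the comments above, a minimal sketch of this setting with that single entry filled in (illustrative only; this run leaves the list commented out) would be:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]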
	I0914 22:12:00.230802   29206 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 22:12:00.230814   29206 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 22:12:00.230828   29206 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0914 22:12:00.230841   29206 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0914 22:12:00.230850   29206 command_runner.go:130] > #
	I0914 22:12:00.230858   29206 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0914 22:12:00.230872   29206 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0914 22:12:00.230883   29206 command_runner.go:130] > #  runtime_type = "oci"
	I0914 22:12:00.230893   29206 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0914 22:12:00.230903   29206 command_runner.go:130] > #  privileged_without_host_devices = false
	I0914 22:12:00.230913   29206 command_runner.go:130] > #  allowed_annotations = []
	I0914 22:12:00.230919   29206 command_runner.go:130] > # Where:
	I0914 22:12:00.230946   29206 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0914 22:12:00.230955   29206 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0914 22:12:00.230962   29206 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 22:12:00.230968   29206 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 22:12:00.230972   29206 command_runner.go:130] > #   in $PATH.
	I0914 22:12:00.230979   29206 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0914 22:12:00.230986   29206 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 22:12:00.230993   29206 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0914 22:12:00.230999   29206 command_runner.go:130] > #   state.
	I0914 22:12:00.231005   29206 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 22:12:00.231013   29206 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0914 22:12:00.231020   29206 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 22:12:00.231031   29206 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 22:12:00.231040   29206 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 22:12:00.231049   29206 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 22:12:00.231056   29206 command_runner.go:130] > #   The currently recognized values are:
	I0914 22:12:00.231063   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 22:12:00.231072   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 22:12:00.231080   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 22:12:00.231088   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 22:12:00.231098   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 22:12:00.231107   29206 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 22:12:00.231115   29206 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 22:12:00.231124   29206 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0914 22:12:00.231131   29206 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 22:12:00.231136   29206 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 22:12:00.231142   29206 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 22:12:00.231147   29206 command_runner.go:130] > runtime_type = "oci"
	I0914 22:12:00.231153   29206 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 22:12:00.231158   29206 command_runner.go:130] > runtime_config_path = ""
	I0914 22:12:00.231169   29206 command_runner.go:130] > monitor_path = ""
	I0914 22:12:00.231175   29206 command_runner.go:130] > monitor_cgroup = ""
	I0914 22:12:00.231180   29206 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 22:12:00.231188   29206 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0914 22:12:00.231194   29206 command_runner.go:130] > # running containers
	I0914 22:12:00.231199   29206 command_runner.go:130] > #[crio.runtime.runtimes.crun]
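A minimal sketch of what an enabled crun handler could look like, modeled on the runc stanza above; the /usr/bin/crun and /run/crun paths are assumptions and require crun to be present on the node:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"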
	I0914 22:12:00.231207   29206 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0914 22:12:00.231255   29206 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0914 22:12:00.231286   29206 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0914 22:12:00.231294   29206 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0914 22:12:00.231299   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0914 22:12:00.231306   29206 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0914 22:12:00.231310   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0914 22:12:00.231317   29206 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0914 22:12:00.231322   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0914 22:12:00.231331   29206 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 22:12:00.231338   29206 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 22:12:00.231347   29206 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 22:12:00.231360   29206 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0914 22:12:00.231370   29206 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 22:12:00.231378   29206 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 22:12:00.231388   29206 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 22:12:00.231402   29206 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 22:12:00.231414   29206 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 22:12:00.231429   29206 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 22:12:00.231444   29206 command_runner.go:130] > # Example:
	I0914 22:12:00.231452   29206 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 22:12:00.231486   29206 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 22:12:00.231499   29206 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 22:12:00.231509   29206 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 22:12:00.231516   29206 command_runner.go:130] > # cpuset = 0
	I0914 22:12:00.231520   29206 command_runner.go:130] > # cpushares = "0-1"
	I0914 22:12:00.231526   29206 command_runner.go:130] > # Where:
	I0914 22:12:00.231531   29206 command_runner.go:130] > # The workload name is workload-type.
	I0914 22:12:00.231541   29206 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 22:12:00.231548   29206 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 22:12:00.231560   29206 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 22:12:00.231571   29206 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 22:12:00.231584   29206 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 22:12:00.231593   29206 command_runner.go:130] > # 
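To make the workload example above concrete, a pod opting into the hypothetical workload-type workload and overriding cpushares for a container named app could be annotated as sketched below (names and values are illustrative, following the annotation format described in the comments):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                                # activation annotation; value is ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override for "app"
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9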
	I0914 22:12:00.231606   29206 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 22:12:00.231615   29206 command_runner.go:130] > #
	I0914 22:12:00.231624   29206 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 22:12:00.231637   29206 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 22:12:00.231650   29206 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 22:12:00.231664   29206 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 22:12:00.231677   29206 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 22:12:00.231687   29206 command_runner.go:130] > [crio.image]
	I0914 22:12:00.231698   29206 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 22:12:00.231705   29206 command_runner.go:130] > # default_transport = "docker://"
	I0914 22:12:00.231711   29206 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 22:12:00.231720   29206 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:12:00.231726   29206 command_runner.go:130] > # global_auth_file = ""
	I0914 22:12:00.231732   29206 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 22:12:00.231744   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:12:00.231752   29206 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0914 22:12:00.231759   29206 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 22:12:00.231767   29206 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:12:00.231776   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:12:00.231783   29206 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 22:12:00.231790   29206 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 22:12:00.231798   29206 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 22:12:00.231807   29206 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 22:12:00.231815   29206 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 22:12:00.231821   29206 command_runner.go:130] > # pause_command = "/pause"
	I0914 22:12:00.231828   29206 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 22:12:00.231836   29206 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 22:12:00.231845   29206 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 22:12:00.231853   29206 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 22:12:00.231858   29206 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 22:12:00.231865   29206 command_runner.go:130] > # signature_policy = ""
	I0914 22:12:00.231871   29206 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 22:12:00.231881   29206 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 22:12:00.231888   29206 command_runner.go:130] > # changing them here.
	I0914 22:12:00.231892   29206 command_runner.go:130] > # insecure_registries = [
	I0914 22:12:00.231898   29206 command_runner.go:130] > # ]
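As the comments suggest, registry settings are better placed in /etc/containers/registries.conf; a minimal sketch of marking one registry insecure there, using the containers-registries.conf(5) v2 format with an illustrative hostname, is:

	[[registry]]
	location = "registry.example.com"
	insecure = true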
	I0914 22:12:00.231904   29206 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 22:12:00.231912   29206 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 22:12:00.231940   29206 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 22:12:00.231948   29206 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 22:12:00.231952   29206 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 22:12:00.231961   29206 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0914 22:12:00.231967   29206 command_runner.go:130] > # CNI plugins.
	I0914 22:12:00.231971   29206 command_runner.go:130] > [crio.network]
	I0914 22:12:00.231980   29206 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 22:12:00.231987   29206 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 22:12:00.231992   29206 command_runner.go:130] > # cni_default_network = ""
	I0914 22:12:00.232000   29206 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 22:12:00.232007   29206 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 22:12:00.232017   29206 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 22:12:00.232026   29206 command_runner.go:130] > # plugin_dirs = [
	I0914 22:12:00.232036   29206 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 22:12:00.232045   29206 command_runner.go:130] > # ]
	I0914 22:12:00.232055   29206 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 22:12:00.232064   29206 command_runner.go:130] > [crio.metrics]
	I0914 22:12:00.232072   29206 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 22:12:00.232081   29206 command_runner.go:130] > enable_metrics = true
	I0914 22:12:00.232089   29206 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 22:12:00.232097   29206 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 22:12:00.232104   29206 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0914 22:12:00.232112   29206 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 22:12:00.232118   29206 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 22:12:00.232125   29206 command_runner.go:130] > # metrics_collectors = [
	I0914 22:12:00.232129   29206 command_runner.go:130] > # 	"operations",
	I0914 22:12:00.232136   29206 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 22:12:00.232141   29206 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 22:12:00.232148   29206 command_runner.go:130] > # 	"operations_errors",
	I0914 22:12:00.232152   29206 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 22:12:00.232161   29206 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 22:12:00.232166   29206 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 22:12:00.232170   29206 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 22:12:00.232175   29206 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 22:12:00.232182   29206 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 22:12:00.232186   29206 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 22:12:00.232190   29206 command_runner.go:130] > # 	"containers_oom_total",
	I0914 22:12:00.232197   29206 command_runner.go:130] > # 	"containers_oom",
	I0914 22:12:00.232202   29206 command_runner.go:130] > # 	"processes_defunct",
	I0914 22:12:00.232208   29206 command_runner.go:130] > # 	"operations_total",
	I0914 22:12:00.232212   29206 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 22:12:00.232219   29206 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 22:12:00.232224   29206 command_runner.go:130] > # 	"operations_errors_total",
	I0914 22:12:00.232230   29206 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 22:12:00.232238   29206 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 22:12:00.232248   29206 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 22:12:00.232259   29206 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 22:12:00.232269   29206 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 22:12:00.232280   29206 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 22:12:00.232290   29206 command_runner.go:130] > # ]
	I0914 22:12:00.232301   29206 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 22:12:00.232308   29206 command_runner.go:130] > # metrics_port = 9090
	I0914 22:12:00.232313   29206 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 22:12:00.232320   29206 command_runner.go:130] > # metrics_socket = ""
	I0914 22:12:00.232325   29206 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 22:12:00.232333   29206 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 22:12:00.232342   29206 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 22:12:00.232347   29206 command_runner.go:130] > # certificate on any modification event.
	I0914 22:12:00.232354   29206 command_runner.go:130] > # metrics_cert = ""
	I0914 22:12:00.232359   29206 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 22:12:00.232367   29206 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 22:12:00.232371   29206 command_runner.go:130] > # metrics_key = ""
	I0914 22:12:00.232377   29206 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 22:12:00.232383   29206 command_runner.go:130] > [crio.tracing]
	I0914 22:12:00.232389   29206 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 22:12:00.232395   29206 command_runner.go:130] > # enable_tracing = false
	I0914 22:12:00.232403   29206 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 22:12:00.232410   29206 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 22:12:00.232415   29206 command_runner.go:130] > # Number of samples to collect per million spans.
	I0914 22:12:00.232421   29206 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 22:12:00.232427   29206 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 22:12:00.232433   29206 command_runner.go:130] > [crio.stats]
	I0914 22:12:00.232439   29206 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 22:12:00.232447   29206 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 22:12:00.232458   29206 command_runner.go:130] > # stats_collection_period = 0
	I0914 22:12:00.232614   29206 command_runner.go:130] ! time="2023-09-14 22:12:00.212553560Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0914 22:12:00.232640   29206 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0914 22:12:00.232741   29206 cni.go:84] Creating CNI manager for ""
	I0914 22:12:00.232753   29206 cni.go:136] 3 nodes found, recommending kindnet
	I0914 22:12:00.232765   29206 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:12:00.232792   29206 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124911 NodeName:multinode-124911-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:12:00.232951   29206 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-124911-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:12:00.233009   29206 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-124911-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:12:00.233059   29206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:12:00.242214   29206 command_runner.go:130] > kubeadm
	I0914 22:12:00.242232   29206 command_runner.go:130] > kubectl
	I0914 22:12:00.242238   29206 command_runner.go:130] > kubelet
	I0914 22:12:00.242305   29206 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:12:00.242366   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0914 22:12:00.251097   29206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 22:12:00.266311   29206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:12:00.281191   29206 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0914 22:12:00.284674   29206 command_runner.go:130] > 192.168.39.116	control-plane.minikube.internal
	I0914 22:12:00.284866   29206 host.go:66] Checking if "multinode-124911" exists ...
	I0914 22:12:00.285203   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:12:00.285348   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:12:00.285391   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:12:00.300023   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I0914 22:12:00.300463   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:12:00.300916   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:12:00.300939   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:12:00.301283   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:12:00.301487   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:12:00.301638   29206 start.go:304] JoinCluster: &{Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:12:00.301778   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0914 22:12:00.301797   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:12:00.304793   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:12:00.305266   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:12:00.305307   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:12:00.305483   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:12:00.305667   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:12:00.305837   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:12:00.305956   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:12:00.484484   29206 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x7lse3.l3duenahr204pd74 --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:12:00.484535   29206 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:12:00.484576   29206 host.go:66] Checking if "multinode-124911" exists ...
	I0914 22:12:00.484998   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:12:00.485052   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:12:00.500709   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0914 22:12:00.501122   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:12:00.501559   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:12:00.501581   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:12:00.501898   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:12:00.502071   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:12:00.502229   29206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-124911-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0914 22:12:00.502250   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:12:00.505101   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:12:00.505587   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:12:00.505615   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:12:00.505755   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:12:00.505934   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:12:00.506087   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:12:00.506242   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:12:00.662442   29206 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0914 22:12:00.725170   29206 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-mmwd5, kube-system/kube-proxy-c4qjg
	I0914 22:12:03.745495   29206 command_runner.go:130] > node/multinode-124911-m02 cordoned
	I0914 22:12:03.745529   29206 command_runner.go:130] > pod "busybox-5bc68d56bd-lv55w" has DeletionTimestamp older than 1 seconds, skipping
	I0914 22:12:03.745539   29206 command_runner.go:130] > node/multinode-124911-m02 drained
	I0914 22:12:03.745564   29206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-124911-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.243311349s)
	I0914 22:12:03.745581   29206 node.go:108] successfully drained node "m02"
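The deprecation warning above comes from kubectl itself: --delete-local-data has been replaced by --delete-emptydir-data. An equivalent manual drain of this worker, keeping the other flags minikube passed, would be:

	kubectl drain multinode-124911-m02 --force --grace-period=1 \
	  --skip-wait-for-delete-timeout=1 --disable-eviction \
	  --ignore-daemonsets --delete-emptydir-data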
	I0914 22:12:03.746037   29206 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:12:03.746259   29206 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:12:03.746579   29206 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0914 22:12:03.746641   29206 round_trippers.go:463] DELETE https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:12:03.746650   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:03.746658   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:03.746664   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:03.746673   29206 round_trippers.go:473]     Content-Type: application/json
	I0914 22:12:03.757835   29206 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0914 22:12:03.757858   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:03.757868   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:03.757876   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:03.757884   29206 round_trippers.go:580]     Content-Length: 171
	I0914 22:12:03.757891   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:03 GMT
	I0914 22:12:03.757898   29206 round_trippers.go:580]     Audit-Id: 667ec9a8-2461-4a05-a1f3-6b3e51440e1f
	I0914 22:12:03.757906   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:03.757913   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:03.757941   29206 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-124911-m02","kind":"nodes","uid":"cd983e44-fc71-4637-af68-c9e7572bc178"}}
	I0914 22:12:03.757974   29206 node.go:124] successfully deleted node "m02"
	I0914 22:12:03.757993   29206 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:12:03.758014   29206 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:12:03.758035   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x7lse3.l3duenahr204pd74 --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-124911-m02"
	I0914 22:12:03.808830   29206 command_runner.go:130] ! W0914 22:12:03.798009    2597 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0914 22:12:03.809298   29206 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0914 22:12:03.953645   29206 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0914 22:12:03.953674   29206 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0914 22:12:04.705563   29206 command_runner.go:130] > [preflight] Running pre-flight checks
	I0914 22:12:04.705601   29206 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0914 22:12:04.705617   29206 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0914 22:12:04.705629   29206 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:12:04.705642   29206 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:12:04.705651   29206 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 22:12:04.705675   29206 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0914 22:12:04.705684   29206 command_runner.go:130] > This node has joined the cluster:
	I0914 22:12:04.705693   29206 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0914 22:12:04.705703   29206 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0914 22:12:04.705712   29206 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0914 22:12:04.705744   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0914 22:12:04.997341   29206 start.go:306] JoinCluster complete in 4.695692147s
	I0914 22:12:04.997369   29206 cni.go:84] Creating CNI manager for ""
	I0914 22:12:04.997375   29206 cni.go:136] 3 nodes found, recommending kindnet
	I0914 22:12:04.997424   29206 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:12:05.003194   29206 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 22:12:05.003220   29206 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0914 22:12:05.003230   29206 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0914 22:12:05.003240   29206 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:12:05.003248   29206 command_runner.go:130] > Access: 2023-09-14 22:09:36.137726050 +0000
	I0914 22:12:05.003257   29206 command_runner.go:130] > Modify: 2023-09-13 23:09:37.000000000 +0000
	I0914 22:12:05.003265   29206 command_runner.go:130] > Change: 2023-09-14 22:09:34.480726050 +0000
	I0914 22:12:05.003271   29206 command_runner.go:130] >  Birth: -
	I0914 22:12:05.003352   29206 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 22:12:05.003370   29206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:12:05.021497   29206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 22:12:05.406284   29206 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:12:05.410604   29206 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:12:05.413136   29206 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0914 22:12:05.423557   29206 command_runner.go:130] > daemonset.apps/kindnet configured
	I0914 22:12:05.426804   29206 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:12:05.427059   29206 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:12:05.427372   29206 round_trippers.go:463] GET https://192.168.39.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 22:12:05.427385   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.427393   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.427398   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.429807   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:05.429831   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.429842   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.429854   29206 round_trippers.go:580]     Content-Length: 291
	I0914 22:12:05.429867   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.429880   29206 round_trippers.go:580]     Audit-Id: 2cd6bd92-b779-4eec-9ab2-a4dede34f40c
	I0914 22:12:05.429892   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.429902   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.429917   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.429943   29206 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d40ee9-9834-4f82-84c2-51e3c14c181f","resourceVersion":"894","creationTimestamp":"2023-09-14T21:59:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0914 22:12:05.430043   29206 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-124911" context rescaled to 1 replicas
	I0914 22:12:05.430076   29206 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:12:05.431933   29206 out.go:177] * Verifying Kubernetes components...
	I0914 22:12:05.433212   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:12:05.447518   29206 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:12:05.447851   29206 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:12:05.448094   29206 node_ready.go:35] waiting up to 6m0s for node "multinode-124911-m02" to be "Ready" ...
	I0914 22:12:05.448162   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:12:05.448171   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.448178   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.448184   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.451238   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:12:05.451263   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.451273   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.451282   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.451290   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.451297   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.451302   29206 round_trippers.go:580]     Audit-Id: 0831e74c-a4a1-48ec-a411-003088691511
	I0914 22:12:05.451315   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.451454   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"8e34404b-42e6-43f4-a225-55ff2168406c","resourceVersion":"1041","creationTimestamp":"2023-09-14T22:12:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:12:04Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0914 22:12:05.451753   29206 node_ready.go:49] node "multinode-124911-m02" has status "Ready":"True"
	I0914 22:12:05.451770   29206 node_ready.go:38] duration metric: took 3.659711ms waiting for node "multinode-124911-m02" to be "Ready" ...
	I0914 22:12:05.451783   29206 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:12:05.451852   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:12:05.451861   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.451869   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.451881   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.456808   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:12:05.456822   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.456828   29206 round_trippers.go:580]     Audit-Id: ca3a9652-9d56-4cbb-82ba-38a60488a8be
	I0914 22:12:05.456834   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.456839   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.456846   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.456854   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.456867   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.458553   29206 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1047"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"890","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82246 chars]
	I0914 22:12:05.462025   29206 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.462101   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:12:05.462113   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.462124   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.462134   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.465020   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:05.465035   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.465041   29206 round_trippers.go:580]     Audit-Id: 9ddc9a25-69e3-496a-b761-06ef27786417
	I0914 22:12:05.465047   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.465055   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.465069   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.465077   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.465088   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.465249   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"890","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0914 22:12:05.465729   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:12:05.465743   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.465754   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.465763   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.467865   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:05.467878   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.467884   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.467889   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.467894   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.467899   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.467906   29206 round_trippers.go:580]     Audit-Id: 902ca901-1f76-449d-95dd-5805d0d09274
	I0914 22:12:05.467916   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.468260   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:12:05.468547   29206 pod_ready.go:92] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"True"
	I0914 22:12:05.468560   29206 pod_ready.go:81] duration metric: took 6.514457ms waiting for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.468568   29206 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.468618   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-124911
	I0914 22:12:05.468627   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.468634   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.468641   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.470497   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:12:05.470510   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.470518   29206 round_trippers.go:580]     Audit-Id: 0bc5e7c2-bb3d-4848-ba4d-f865b9048028
	I0914 22:12:05.470523   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.470531   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.470539   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.470548   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.470557   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.470777   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124911","namespace":"kube-system","uid":"1b195f1a-48a6-4b46-a819-2aeb9fe4e00c","resourceVersion":"882","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.116:2379","kubernetes.io/config.hash":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.mirror":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.seen":"2023-09-14T21:59:20.641783376Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0914 22:12:05.471158   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:12:05.471173   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.471184   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.471200   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.473476   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:05.473488   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.473494   29206 round_trippers.go:580]     Audit-Id: b57b1104-b7d0-4d82-ae49-fede8809f858
	I0914 22:12:05.473499   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.473504   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.473516   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.473533   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.473541   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.473726   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:12:05.474078   29206 pod_ready.go:92] pod "etcd-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:12:05.474093   29206 pod_ready.go:81] duration metric: took 5.51928ms waiting for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.474109   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.474166   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124911
	I0914 22:12:05.474176   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.474184   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.474196   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.476214   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:05.476227   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.476234   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.476239   29206 round_trippers.go:580]     Audit-Id: 1af0ed90-66eb-492b-875c-8fa78ca779ec
	I0914 22:12:05.476244   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.476252   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.476263   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.476272   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.476412   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124911","namespace":"kube-system","uid":"e9a93d33-82f3-4cfe-9b2c-92560dd09d09","resourceVersion":"849","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.116:8443","kubernetes.io/config.hash":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.mirror":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.seen":"2023-09-14T21:59:20.641778793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0914 22:12:05.476803   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:12:05.476816   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.476823   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.476829   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.479253   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:05.479265   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.479271   29206 round_trippers.go:580]     Audit-Id: 797db624-eef6-44fc-a8ea-418c4bb370f5
	I0914 22:12:05.479276   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.479283   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.479292   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.479301   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.479314   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.479450   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:12:05.479808   29206 pod_ready.go:92] pod "kube-apiserver-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:12:05.479821   29206 pod_ready.go:81] duration metric: took 5.701553ms waiting for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.479829   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.479862   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124911
	I0914 22:12:05.479870   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.479877   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.479883   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.481850   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:12:05.481862   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.481868   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.481873   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.481878   29206 round_trippers.go:580]     Audit-Id: 5ad63d6f-450a-43d7-9955-2e6e542efa54
	I0914 22:12:05.481888   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.481897   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.481909   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.482224   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124911","namespace":"kube-system","uid":"3efae123-9cdd-457a-a317-77370a6c5288","resourceVersion":"854","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.mirror":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.seen":"2023-09-14T21:59:20.641781682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0914 22:12:05.482524   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:12:05.482535   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.482541   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.482547   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.484307   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:12:05.484319   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.484325   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.484332   29206 round_trippers.go:580]     Audit-Id: ccf4e605-3c77-4255-8d12-b784f3032ea6
	I0914 22:12:05.484342   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.484354   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.484366   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.484375   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.484512   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:12:05.484897   29206 pod_ready.go:92] pod "kube-controller-manager-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:12:05.484914   29206 pod_ready.go:81] duration metric: took 5.079304ms waiting for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.484925   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:05.648231   29206 request.go:629] Waited for 163.238231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:12:05.648329   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:12:05.648338   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.648349   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.648360   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.652581   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:12:05.652619   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.652631   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.652640   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.652648   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.652657   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.652665   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.652676   29206 round_trippers.go:580]     Audit-Id: 3bda30cc-6946-405e-aea6-f6e0a4456d0c
	I0914 22:12:05.652807   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2kd4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"de9e2ee3-364a-447b-9d7f-be85d86838ae","resourceVersion":"820","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0914 22:12:05.848679   29206 request.go:629] Waited for 195.345969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:12:05.848733   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:12:05.848768   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:05.848779   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:05.848785   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:05.851958   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:12:05.851984   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:05.851995   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:05 GMT
	I0914 22:12:05.852004   29206 round_trippers.go:580]     Audit-Id: 29eaa523-451d-4166-a3f5-73d56695bf9b
	I0914 22:12:05.852013   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:05.852022   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:05.852031   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:05.852040   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:05.852250   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:12:05.852686   29206 pod_ready.go:92] pod "kube-proxy-2kd4p" in "kube-system" namespace has status "Ready":"True"
	I0914 22:12:05.852707   29206 pod_ready.go:81] duration metric: took 367.770149ms waiting for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
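The "Waited for ... due to client-side throttling, not priority and fairness" lines in this stretch come from client-go's default client-side rate limiter (QPS 5, burst 10), which spaces out requests when the test polls the API this quickly; they are not server-side priority-and-fairness rejections. A minimal sketch of where those limits live, assuming a stock client-go rest.Config (raising them here is purely illustrative, not something the minikube test does):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig; the path is whatever the caller normally uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// With QPS/Burst left at zero, client-go falls back to its defaults
	// (QPS 5, burst 10); that token bucket is what inserts the waits the
	// log reports as "client-side throttling".
	cfg.QPS = 50
	cfg.Burst = 100

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = clientset // subsequent requests are issued without the default back-off delays
}
```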
	I0914 22:12:05.852720   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5tcff" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:06.049187   29206 request.go:629] Waited for 196.382987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:12:06.049258   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:12:06.049266   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:06.049277   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:06.049289   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:06.051790   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:06.051814   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:06.051825   29206 round_trippers.go:580]     Audit-Id: 1071923e-03c8-4aff-990a-ea36652f450c
	I0914 22:12:06.051834   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:06.051843   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:06.051852   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:06.051865   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:06.051874   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:06 GMT
	I0914 22:12:06.051994   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5tcff","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc8d22f-954e-4a49-878e-9d1711d49c40","resourceVersion":"705","creationTimestamp":"2023-09-14T22:01:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0914 22:12:06.248909   29206 request.go:629] Waited for 196.384806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:12:06.248989   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:12:06.249004   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:06.249030   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:06.249043   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:06.251947   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:06.251964   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:06.251970   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:06 GMT
	I0914 22:12:06.251975   29206 round_trippers.go:580]     Audit-Id: 2ea8a49d-fa9e-421d-88ad-5d4df07e996f
	I0914 22:12:06.251980   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:06.251985   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:06.251990   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:06.251996   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:06.252624   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m03","uid":"5e8b04da-e8ae-403d-9e94-bb008093a0b9","resourceVersion":"839","creationTimestamp":"2023-09-14T22:02:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:02:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0914 22:12:06.252865   29206 pod_ready.go:92] pod "kube-proxy-5tcff" in "kube-system" namespace has status "Ready":"True"
	I0914 22:12:06.252876   29206 pod_ready.go:81] duration metric: took 400.149143ms waiting for pod "kube-proxy-5tcff" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:06.252886   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:06.448686   29206 request.go:629] Waited for 195.733955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:12:06.448742   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:12:06.448747   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:06.448755   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:06.448762   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:06.453448   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:12:06.453476   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:06.453486   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:06.453491   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:06.453497   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:06 GMT
	I0914 22:12:06.453502   29206 round_trippers.go:580]     Audit-Id: af5beed8-0ce0-4c6f-a6d6-c73f60d7f3c2
	I0914 22:12:06.453507   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:06.453512   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:06.453680   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4qjg","generateName":"kube-proxy-","namespace":"kube-system","uid":"8214b42e-6656-4e01-bc47-82d6ab6592e5","resourceVersion":"1061","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0914 22:12:06.648373   29206 request.go:629] Waited for 194.17638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:12:06.648427   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:12:06.648432   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:06.648451   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:06.648460   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:06.651165   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:06.651186   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:06.651196   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:06.651205   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:06.651212   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:06.651220   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:06.651229   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:06 GMT
	I0914 22:12:06.651238   29206 round_trippers.go:580]     Audit-Id: 7013d24f-759e-4d50-8df3-75a0b4d19a75
	I0914 22:12:06.651417   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"8e34404b-42e6-43f4-a225-55ff2168406c","resourceVersion":"1041","creationTimestamp":"2023-09-14T22:12:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:12:04Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0914 22:12:06.651784   29206 pod_ready.go:92] pod "kube-proxy-c4qjg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:12:06.651807   29206 pod_ready.go:81] duration metric: took 398.911576ms waiting for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:06.651820   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:06.849281   29206 request.go:629] Waited for 197.382971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:12:06.849378   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:12:06.849386   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:06.849395   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:06.849413   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:06.853006   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:12:06.853032   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:06.853039   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:06.853045   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:06.853050   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:06.853056   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:06 GMT
	I0914 22:12:06.853061   29206 round_trippers.go:580]     Audit-Id: 8dc6282f-5c74-47b8-bd35-c9a1aadd9e2c
	I0914 22:12:06.853069   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:06.853581   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124911","namespace":"kube-system","uid":"f8d502b7-9ee7-474e-ab64-9f721ee6970e","resourceVersion":"864","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.mirror":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.seen":"2023-09-14T21:59:20.641782607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0914 22:12:07.048348   29206 request.go:629] Waited for 194.2432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:12:07.048417   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:12:07.048422   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:07.048432   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:07.048441   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:07.051329   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:12:07.051351   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:07.051360   29206 round_trippers.go:580]     Audit-Id: 478817f8-9404-4c5d-b1f5-f031dd2a52a0
	I0914 22:12:07.051368   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:07.051376   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:07.051383   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:07.051391   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:07.051401   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:07 GMT
	I0914 22:12:07.051968   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:12:07.052370   29206 pod_ready.go:92] pod "kube-scheduler-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:12:07.052389   29206 pod_ready.go:81] duration metric: took 400.561768ms waiting for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:12:07.052398   29206 pod_ready.go:38] duration metric: took 1.600599658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
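The wait that just finished is performed with raw round trips by minikube's pod_ready helper; as a point of comparison only, here is a minimal sketch of the same "poll until Ready" pattern written against client-go. The label selector and 6m bound mirror the log; the kubeconfig path and polling interval are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes until every kube-proxy pod is Ready,
	// the same upper bound the log uses for its system-critical pod wait.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				return false, nil
			}
		}
		return len(pods.Items) > 0, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all kube-proxy pods are Ready")
}
```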
	I0914 22:12:07.052410   29206 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:12:07.052459   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:12:07.065333   29206 system_svc.go:56] duration metric: took 12.916882ms WaitForService to wait for kubelet.
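The kubelet check above relies only on the exit status of systemctl is-active. A small sketch of the same probe in Go via os/exec, with the unit name simplified to "kubelet" for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active,
	// so the returned error alone answers "is kubelet running?".
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```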
	I0914 22:12:07.065353   29206 kubeadm.go:581] duration metric: took 1.635252984s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:12:07.065372   29206 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:12:07.249066   29206 request.go:629] Waited for 183.637191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0914 22:12:07.249121   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0914 22:12:07.249126   29206 round_trippers.go:469] Request Headers:
	I0914 22:12:07.249134   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:12:07.249140   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:12:07.252390   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:12:07.252403   29206 round_trippers.go:577] Response Headers:
	I0914 22:12:07.252409   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:12:07.252414   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:12:07.252420   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:12:07.252425   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:12:07 GMT
	I0914 22:12:07.252430   29206 round_trippers.go:580]     Audit-Id: 9456fa4d-f361-4df1-bf4c-2d3a131e95f0
	I0914 22:12:07.252435   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:12:07.252931   29206 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1063"},"items":[{"metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15389 chars]
	I0914 22:12:07.253499   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:12:07.253517   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:12:07.253525   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:12:07.253529   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:12:07.253537   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:12:07.253541   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:12:07.253544   29206 node_conditions.go:105] duration metric: took 188.168081ms to run NodePressure ...
	I0914 22:12:07.253554   29206 start.go:228] waiting for startup goroutines ...
	I0914 22:12:07.253576   29206 start.go:242] writing updated cluster config ...
	I0914 22:12:07.253986   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:12:07.254067   29206 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 22:12:07.256348   29206 out.go:177] * Starting worker node multinode-124911-m03 in cluster multinode-124911
	I0914 22:12:07.257843   29206 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:12:07.257860   29206 cache.go:57] Caching tarball of preloaded images
	I0914 22:12:07.257959   29206 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:12:07.257971   29206 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 22:12:07.258049   29206 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/config.json ...
	I0914 22:12:07.258218   29206 start.go:365] acquiring machines lock for multinode-124911-m03: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:12:07.258257   29206 start.go:369] acquired machines lock for "multinode-124911-m03" in 22.286µs
	I0914 22:12:07.258271   29206 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:12:07.258278   29206 fix.go:54] fixHost starting: m03
	I0914 22:12:07.258532   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:12:07.258567   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:12:07.273011   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0914 22:12:07.273410   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:12:07.273821   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:12:07.273845   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:12:07.274188   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:12:07.274352   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .DriverName
	I0914 22:12:07.274521   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetState
	I0914 22:12:07.276314   29206 fix.go:102] recreateIfNeeded on multinode-124911-m03: state=Running err=<nil>
	W0914 22:12:07.276329   29206 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:12:07.278454   29206 out.go:177] * Updating the running kvm2 "multinode-124911-m03" VM ...
	I0914 22:12:07.279939   29206 machine.go:88] provisioning docker machine ...
	I0914 22:12:07.279956   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .DriverName
	I0914 22:12:07.280129   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetMachineName
	I0914 22:12:07.280294   29206 buildroot.go:166] provisioning hostname "multinode-124911-m03"
	I0914 22:12:07.280315   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetMachineName
	I0914 22:12:07.280548   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	I0914 22:12:07.282673   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.283145   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:12:07.283176   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.283294   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHPort
	I0914 22:12:07.283459   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:12:07.283623   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:12:07.283773   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHUsername
	I0914 22:12:07.283927   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:12:07.284214   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0914 22:12:07.284227   29206 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-124911-m03 && echo "multinode-124911-m03" | sudo tee /etc/hostname
	I0914 22:12:07.420295   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-124911-m03
	
	I0914 22:12:07.420325   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	I0914 22:12:07.422828   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.423157   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:12:07.423191   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.423342   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHPort
	I0914 22:12:07.423557   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:12:07.423716   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:12:07.423904   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHUsername
	I0914 22:12:07.424089   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:12:07.424400   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0914 22:12:07.424420   29206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124911-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124911-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124911-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:12:07.544313   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
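The hostname and /etc/hosts edits above are pushed to the guest over SSH by libmachine's runner. A minimal sketch of an equivalent remote command using golang.org/x/crypto/ssh; the key path and user are assumptions for illustration, while the address and command mirror the log:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; minikube keeps a per-machine key under its profile directory.
	key, err := os.ReadFile("/path/to/machines/multinode-124911-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.39.207:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same provisioning step the log shows: set the guest hostname and persist it.
	out, err := sess.CombinedOutput(`sudo hostname multinode-124911-m03 && echo "multinode-124911-m03" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}
```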
	I0914 22:12:07.544342   29206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:12:07.544358   29206 buildroot.go:174] setting up certificates
	I0914 22:12:07.544365   29206 provision.go:83] configureAuth start
	I0914 22:12:07.544374   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetMachineName
	I0914 22:12:07.544693   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetIP
	I0914 22:12:07.547248   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.547614   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:12:07.547647   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.547831   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	I0914 22:12:07.550260   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.550682   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:12:07.550721   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.550846   29206 provision.go:138] copyHostCerts
	I0914 22:12:07.550875   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:12:07.550904   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:12:07.550914   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:12:07.550977   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:12:07.551043   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:12:07.551059   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:12:07.551065   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:12:07.551088   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:12:07.551130   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:12:07.551147   29206 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:12:07.551153   29206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:12:07.551179   29206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:12:07.551267   29206 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.multinode-124911-m03 san=[192.168.39.207 192.168.39.207 localhost 127.0.0.1 minikube multinode-124911-m03]
	I0914 22:12:07.666257   29206 provision.go:172] copyRemoteCerts
	I0914 22:12:07.666312   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:12:07.666333   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	I0914 22:12:07.668761   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.669174   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:12:07.669203   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.669335   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHPort
	I0914 22:12:07.669538   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:12:07.669689   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHUsername
	I0914 22:12:07.669858   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m03/id_rsa Username:docker}
	I0914 22:12:07.760434   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 22:12:07.760495   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:12:07.782763   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 22:12:07.782822   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0914 22:12:07.803509   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 22:12:07.803574   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:12:07.825069   29206 provision.go:86] duration metric: configureAuth took 280.693664ms
	I0914 22:12:07.825089   29206 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:12:07.825324   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:12:07.825405   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	I0914 22:12:07.828012   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.828523   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:12:07.828558   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:12:07.828744   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHPort
	I0914 22:12:07.828994   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:12:07.829159   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:12:07.829340   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHUsername
	I0914 22:12:07.829505   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:12:07.829942   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0914 22:12:07.829973   29206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:13:38.329845   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:13:38.329877   29206 machine.go:91] provisioned docker machine in 1m31.049922738s
	I0914 22:13:38.329891   29206 start.go:300] post-start starting for "multinode-124911-m03" (driver="kvm2")
	I0914 22:13:38.329903   29206 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:13:38.329924   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .DriverName
	I0914 22:13:38.330242   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:13:38.330288   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	I0914 22:13:38.333346   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.333810   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:13:38.333840   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.334006   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHPort
	I0914 22:13:38.334216   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:13:38.334394   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHUsername
	I0914 22:13:38.334558   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m03/id_rsa Username:docker}
	I0914 22:13:38.425650   29206 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:13:38.429538   29206 command_runner.go:130] > NAME=Buildroot
	I0914 22:13:38.429563   29206 command_runner.go:130] > VERSION=2021.02.12-1-g52d8811-dirty
	I0914 22:13:38.429570   29206 command_runner.go:130] > ID=buildroot
	I0914 22:13:38.429578   29206 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 22:13:38.429585   29206 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 22:13:38.429875   29206 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:13:38.429897   29206 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:13:38.429982   29206 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:13:38.430054   29206 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:13:38.430063   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /etc/ssl/certs/134852.pem
	I0914 22:13:38.430144   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:13:38.438676   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:13:38.460395   29206 start.go:303] post-start completed in 130.490597ms
	I0914 22:13:38.460429   29206 fix.go:56] fixHost completed within 1m31.202149452s
	I0914 22:13:38.460456   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	I0914 22:13:38.463210   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.463659   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:13:38.463687   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.463866   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHPort
	I0914 22:13:38.464054   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:13:38.464213   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:13:38.464369   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHUsername
	I0914 22:13:38.464531   29206 main.go:141] libmachine: Using SSH client type: native
	I0914 22:13:38.464898   29206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0914 22:13:38.464913   29206 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:13:38.587909   29206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694729618.578650288
	
	I0914 22:13:38.587936   29206 fix.go:206] guest clock: 1694729618.578650288
	I0914 22:13:38.587948   29206 fix.go:219] Guest: 2023-09-14 22:13:38.578650288 +0000 UTC Remote: 2023-09-14 22:13:38.460435185 +0000 UTC m=+552.195252480 (delta=118.215103ms)
	I0914 22:13:38.587971   29206 fix.go:190] guest clock delta is within tolerance: 118.215103ms
	I0914 22:13:38.587978   29206 start.go:83] releasing machines lock for "multinode-124911-m03", held for 1m31.329711136s
	I0914 22:13:38.588006   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .DriverName
	I0914 22:13:38.588300   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetIP
	I0914 22:13:38.590947   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.591268   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:13:38.591287   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.593413   29206 out.go:177] * Found network options:
	I0914 22:13:38.594965   29206 out.go:177]   - NO_PROXY=192.168.39.116,192.168.39.254
	W0914 22:13:38.596363   29206 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 22:13:38.596382   29206 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 22:13:38.596394   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .DriverName
	I0914 22:13:38.596934   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .DriverName
	I0914 22:13:38.597120   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .DriverName
	I0914 22:13:38.597230   29206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:13:38.597265   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	W0914 22:13:38.597410   29206 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 22:13:38.597433   29206 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 22:13:38.597493   29206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:13:38.597508   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHHostname
	I0914 22:13:38.600310   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.600345   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.600716   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:13:38.600750   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.600778   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:13:38.600805   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:38.600874   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHPort
	I0914 22:13:38.601012   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHPort
	I0914 22:13:38.601081   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:13:38.601160   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHKeyPath
	I0914 22:13:38.601222   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHUsername
	I0914 22:13:38.601292   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetSSHUsername
	I0914 22:13:38.601366   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m03/id_rsa Username:docker}
	I0914 22:13:38.601416   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m03/id_rsa Username:docker}
	I0914 22:13:38.833277   29206 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 22:13:38.833372   29206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:13:38.839025   29206 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 22:13:38.839159   29206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:13:38.839243   29206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:13:38.847894   29206 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 22:13:38.847914   29206 start.go:469] detecting cgroup driver to use...
	I0914 22:13:38.847978   29206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:13:38.861923   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:13:38.875600   29206 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:13:38.875661   29206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:13:38.891184   29206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:13:38.905142   29206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:13:39.025478   29206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:13:39.138770   29206 docker.go:212] disabling docker service ...
	I0914 22:13:39.138860   29206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:13:39.152683   29206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:13:39.164769   29206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:13:39.282335   29206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:13:39.398644   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:13:39.411581   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:13:39.428365   29206 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 22:13:39.428409   29206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:13:39.428448   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:13:39.437851   29206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:13:39.437893   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:13:39.447334   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:13:39.456522   29206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:13:39.465733   29206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:13:39.475332   29206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:13:39.483480   29206 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 22:13:39.483602   29206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:13:39.492187   29206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:13:39.609544   29206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:13:39.817367   29206 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:13:39.817428   29206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:13:39.822588   29206 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 22:13:39.822615   29206 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 22:13:39.822623   29206 command_runner.go:130] > Device: 16h/22d	Inode: 1206        Links: 1
	I0914 22:13:39.822629   29206 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:13:39.822634   29206 command_runner.go:130] > Access: 2023-09-14 22:13:39.740578026 +0000
	I0914 22:13:39.822641   29206 command_runner.go:130] > Modify: 2023-09-14 22:13:39.740578026 +0000
	I0914 22:13:39.822647   29206 command_runner.go:130] > Change: 2023-09-14 22:13:39.740578026 +0000
	I0914 22:13:39.822650   29206 command_runner.go:130] >  Birth: -
	I0914 22:13:39.822667   29206 start.go:537] Will wait 60s for crictl version
	I0914 22:13:39.822714   29206 ssh_runner.go:195] Run: which crictl
	I0914 22:13:39.826335   29206 command_runner.go:130] > /usr/bin/crictl
	I0914 22:13:39.826389   29206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:13:39.856502   29206 command_runner.go:130] > Version:  0.1.0
	I0914 22:13:39.856521   29206 command_runner.go:130] > RuntimeName:  cri-o
	I0914 22:13:39.856526   29206 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0914 22:13:39.856532   29206 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0914 22:13:39.857433   29206 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:13:39.857506   29206 ssh_runner.go:195] Run: crio --version
	I0914 22:13:39.908429   29206 command_runner.go:130] > crio version 1.24.1
	I0914 22:13:39.908449   29206 command_runner.go:130] > Version:          1.24.1
	I0914 22:13:39.908461   29206 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 22:13:39.908466   29206 command_runner.go:130] > GitTreeState:     dirty
	I0914 22:13:39.908472   29206 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 22:13:39.908476   29206 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 22:13:39.908480   29206 command_runner.go:130] > Compiler:         gc
	I0914 22:13:39.908485   29206 command_runner.go:130] > Platform:         linux/amd64
	I0914 22:13:39.908491   29206 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:13:39.908504   29206 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:13:39.908509   29206 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:13:39.908517   29206 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:13:39.908612   29206 ssh_runner.go:195] Run: crio --version
	I0914 22:13:39.954796   29206 command_runner.go:130] > crio version 1.24.1
	I0914 22:13:39.954821   29206 command_runner.go:130] > Version:          1.24.1
	I0914 22:13:39.954834   29206 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0914 22:13:39.954842   29206 command_runner.go:130] > GitTreeState:     dirty
	I0914 22:13:39.954850   29206 command_runner.go:130] > BuildDate:        2023-09-13T22:47:54Z
	I0914 22:13:39.954857   29206 command_runner.go:130] > GoVersion:        go1.19.9
	I0914 22:13:39.954863   29206 command_runner.go:130] > Compiler:         gc
	I0914 22:13:39.954869   29206 command_runner.go:130] > Platform:         linux/amd64
	I0914 22:13:39.954879   29206 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:13:39.954889   29206 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:13:39.954897   29206 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:13:39.954932   29206 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:13:39.957976   29206 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:13:39.959256   29206 out.go:177]   - env NO_PROXY=192.168.39.116
	I0914 22:13:39.960606   29206 out.go:177]   - env NO_PROXY=192.168.39.116,192.168.39.254
	I0914 22:13:39.962063   29206 main.go:141] libmachine: (multinode-124911-m03) Calling .GetIP
	I0914 22:13:39.964629   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:39.965034   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:51:db", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:02:07 +0000 UTC Type:0 Mac:52:54:00:28:51:db Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-124911-m03 Clientid:01:52:54:00:28:51:db}
	I0914 22:13:39.965071   29206 main.go:141] libmachine: (multinode-124911-m03) DBG | domain multinode-124911-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:28:51:db in network mk-multinode-124911
	I0914 22:13:39.965249   29206 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:13:39.968958   29206 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0914 22:13:39.969174   29206 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911 for IP: 192.168.39.207
	I0914 22:13:39.969203   29206 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:13:39.969355   29206 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:13:39.969400   29206 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:13:39.969412   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 22:13:39.969423   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 22:13:39.969436   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 22:13:39.969448   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 22:13:39.969495   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:13:39.969522   29206 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:13:39.969532   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:13:39.969557   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:13:39.969633   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:13:39.969673   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:13:39.969726   29206 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:13:39.969752   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem -> /usr/share/ca-certificates/13485.pem
	I0914 22:13:39.969766   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> /usr/share/ca-certificates/134852.pem
	I0914 22:13:39.969778   29206 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:13:39.970069   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:13:39.992571   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:13:40.014275   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:13:40.036012   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:13:40.057925   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:13:40.078944   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:13:40.100594   29206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:13:40.122002   29206 ssh_runner.go:195] Run: openssl version
	I0914 22:13:40.127059   29206 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0914 22:13:40.127294   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:13:40.135962   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:13:40.140028   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:13:40.140075   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:13:40.140118   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:13:40.145307   29206 command_runner.go:130] > b5213941
	I0914 22:13:40.145606   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:13:40.153118   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:13:40.162113   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:13:40.166062   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:13:40.166204   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:13:40.166254   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:13:40.170907   29206 command_runner.go:130] > 51391683
	I0914 22:13:40.171032   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:13:40.178320   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:13:40.186960   29206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:13:40.190962   29206 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:13:40.191162   29206 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:13:40.191203   29206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:13:40.196545   29206 command_runner.go:130] > 3ec20f2e
	I0914 22:13:40.196764   29206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:13:40.205247   29206 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:13:40.208924   29206 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:13:40.209281   29206 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:13:40.209369   29206 ssh_runner.go:195] Run: crio config
	I0914 22:13:40.257514   29206 command_runner.go:130] ! time="2023-09-14 22:13:40.248382631Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0914 22:13:40.257549   29206 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0914 22:13:40.264297   29206 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 22:13:40.264315   29206 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 22:13:40.264322   29206 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 22:13:40.264325   29206 command_runner.go:130] > #
	I0914 22:13:40.264332   29206 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 22:13:40.264338   29206 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 22:13:40.264343   29206 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 22:13:40.264350   29206 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 22:13:40.264354   29206 command_runner.go:130] > # reload'.
	I0914 22:13:40.264359   29206 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 22:13:40.264366   29206 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 22:13:40.264372   29206 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 22:13:40.264377   29206 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 22:13:40.264384   29206 command_runner.go:130] > [crio]
	I0914 22:13:40.264390   29206 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 22:13:40.264396   29206 command_runner.go:130] > # containers images, in this directory.
	I0914 22:13:40.264400   29206 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 22:13:40.264416   29206 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 22:13:40.264423   29206 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 22:13:40.264430   29206 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 22:13:40.264445   29206 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 22:13:40.264455   29206 command_runner.go:130] > storage_driver = "overlay"
	I0914 22:13:40.264467   29206 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 22:13:40.264477   29206 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 22:13:40.264487   29206 command_runner.go:130] > storage_option = [
	I0914 22:13:40.264496   29206 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 22:13:40.264502   29206 command_runner.go:130] > ]
	I0914 22:13:40.264512   29206 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 22:13:40.264525   29206 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 22:13:40.264536   29206 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 22:13:40.264547   29206 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 22:13:40.264555   29206 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 22:13:40.264560   29206 command_runner.go:130] > # always happen on a node reboot
	I0914 22:13:40.264568   29206 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 22:13:40.264573   29206 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 22:13:40.264580   29206 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 22:13:40.264592   29206 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 22:13:40.264597   29206 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0914 22:13:40.264605   29206 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 22:13:40.264617   29206 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 22:13:40.264624   29206 command_runner.go:130] > # internal_wipe = true
	I0914 22:13:40.264629   29206 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 22:13:40.264638   29206 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 22:13:40.264646   29206 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 22:13:40.264654   29206 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 22:13:40.264660   29206 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 22:13:40.264664   29206 command_runner.go:130] > [crio.api]
	I0914 22:13:40.264669   29206 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 22:13:40.264675   29206 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 22:13:40.264680   29206 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 22:13:40.264687   29206 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 22:13:40.264693   29206 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 22:13:40.264701   29206 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 22:13:40.264706   29206 command_runner.go:130] > # stream_port = "0"
	I0914 22:13:40.264712   29206 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 22:13:40.264717   29206 command_runner.go:130] > # stream_enable_tls = false
	I0914 22:13:40.264723   29206 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 22:13:40.264730   29206 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 22:13:40.264736   29206 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 22:13:40.264744   29206 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 22:13:40.264750   29206 command_runner.go:130] > # minutes.
	I0914 22:13:40.264754   29206 command_runner.go:130] > # stream_tls_cert = ""
	I0914 22:13:40.264763   29206 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 22:13:40.264769   29206 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 22:13:40.264786   29206 command_runner.go:130] > # stream_tls_key = ""
	I0914 22:13:40.264795   29206 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 22:13:40.264801   29206 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 22:13:40.264808   29206 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 22:13:40.264813   29206 command_runner.go:130] > # stream_tls_ca = ""
	I0914 22:13:40.264823   29206 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:13:40.264829   29206 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 22:13:40.264837   29206 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:13:40.264844   29206 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0914 22:13:40.264864   29206 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 22:13:40.264874   29206 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 22:13:40.264878   29206 command_runner.go:130] > [crio.runtime]
	I0914 22:13:40.264884   29206 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 22:13:40.264892   29206 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 22:13:40.264896   29206 command_runner.go:130] > # "nofile=1024:2048"
	I0914 22:13:40.264902   29206 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 22:13:40.264909   29206 command_runner.go:130] > # default_ulimits = [
	I0914 22:13:40.264913   29206 command_runner.go:130] > # ]
	I0914 22:13:40.264921   29206 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 22:13:40.264926   29206 command_runner.go:130] > # no_pivot = false
	I0914 22:13:40.264932   29206 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 22:13:40.264940   29206 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 22:13:40.264947   29206 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 22:13:40.264954   29206 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 22:13:40.264961   29206 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 22:13:40.264970   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:13:40.264977   29206 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 22:13:40.264981   29206 command_runner.go:130] > # Cgroup setting for conmon
	I0914 22:13:40.264990   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 22:13:40.264995   29206 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 22:13:40.265001   29206 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 22:13:40.265009   29206 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 22:13:40.265017   29206 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:13:40.265023   29206 command_runner.go:130] > conmon_env = [
	I0914 22:13:40.265029   29206 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 22:13:40.265034   29206 command_runner.go:130] > ]
	I0914 22:13:40.265040   29206 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 22:13:40.265047   29206 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 22:13:40.265055   29206 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 22:13:40.265060   29206 command_runner.go:130] > # default_env = [
	I0914 22:13:40.265064   29206 command_runner.go:130] > # ]
	I0914 22:13:40.265072   29206 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 22:13:40.265076   29206 command_runner.go:130] > # selinux = false
	I0914 22:13:40.265085   29206 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 22:13:40.265094   29206 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 22:13:40.265101   29206 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 22:13:40.265108   29206 command_runner.go:130] > # seccomp_profile = ""
	I0914 22:13:40.265114   29206 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 22:13:40.265122   29206 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 22:13:40.265130   29206 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 22:13:40.265137   29206 command_runner.go:130] > # which might increase security.
	I0914 22:13:40.265142   29206 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 22:13:40.265150   29206 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 22:13:40.265156   29206 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 22:13:40.265164   29206 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 22:13:40.265170   29206 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 22:13:40.265177   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:13:40.265182   29206 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 22:13:40.265190   29206 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 22:13:40.265195   29206 command_runner.go:130] > # the cgroup blockio controller.
	I0914 22:13:40.265199   29206 command_runner.go:130] > # blockio_config_file = ""
	I0914 22:13:40.265208   29206 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 22:13:40.265215   29206 command_runner.go:130] > # irqbalance daemon.
	I0914 22:13:40.265220   29206 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 22:13:40.265229   29206 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 22:13:40.265236   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:13:40.265243   29206 command_runner.go:130] > # rdt_config_file = ""
	I0914 22:13:40.265249   29206 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 22:13:40.265255   29206 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 22:13:40.265261   29206 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 22:13:40.265267   29206 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 22:13:40.265273   29206 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 22:13:40.265282   29206 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 22:13:40.265288   29206 command_runner.go:130] > # will be added.
	I0914 22:13:40.265293   29206 command_runner.go:130] > # default_capabilities = [
	I0914 22:13:40.265300   29206 command_runner.go:130] > # 	"CHOWN",
	I0914 22:13:40.265304   29206 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 22:13:40.265310   29206 command_runner.go:130] > # 	"FSETID",
	I0914 22:13:40.265314   29206 command_runner.go:130] > # 	"FOWNER",
	I0914 22:13:40.265320   29206 command_runner.go:130] > # 	"SETGID",
	I0914 22:13:40.265324   29206 command_runner.go:130] > # 	"SETUID",
	I0914 22:13:40.265330   29206 command_runner.go:130] > # 	"SETPCAP",
	I0914 22:13:40.265334   29206 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 22:13:40.265340   29206 command_runner.go:130] > # 	"KILL",
	I0914 22:13:40.265343   29206 command_runner.go:130] > # ]
	I0914 22:13:40.265351   29206 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 22:13:40.265357   29206 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:13:40.265364   29206 command_runner.go:130] > # default_sysctls = [
	I0914 22:13:40.265368   29206 command_runner.go:130] > # ]
	I0914 22:13:40.265375   29206 command_runner.go:130] > # List of devices on the host that a
	I0914 22:13:40.265381   29206 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 22:13:40.265389   29206 command_runner.go:130] > # allowed_devices = [
	I0914 22:13:40.265396   29206 command_runner.go:130] > # 	"/dev/fuse",
	I0914 22:13:40.265400   29206 command_runner.go:130] > # ]
	I0914 22:13:40.265407   29206 command_runner.go:130] > # List of additional devices. specified as
	I0914 22:13:40.265414   29206 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 22:13:40.265422   29206 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 22:13:40.265439   29206 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:13:40.265446   29206 command_runner.go:130] > # additional_devices = [
	I0914 22:13:40.265450   29206 command_runner.go:130] > # ]
	I0914 22:13:40.265455   29206 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 22:13:40.265461   29206 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 22:13:40.265465   29206 command_runner.go:130] > # 	"/etc/cdi",
	I0914 22:13:40.265471   29206 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 22:13:40.265475   29206 command_runner.go:130] > # ]
	I0914 22:13:40.265483   29206 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 22:13:40.265490   29206 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 22:13:40.265496   29206 command_runner.go:130] > # Defaults to false.
	I0914 22:13:40.265501   29206 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 22:13:40.265510   29206 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 22:13:40.265518   29206 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 22:13:40.265524   29206 command_runner.go:130] > # hooks_dir = [
	I0914 22:13:40.265530   29206 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 22:13:40.265539   29206 command_runner.go:130] > # ]
	I0914 22:13:40.265549   29206 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 22:13:40.265558   29206 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 22:13:40.265564   29206 command_runner.go:130] > # its default mounts from the following two files:
	I0914 22:13:40.265569   29206 command_runner.go:130] > #
	I0914 22:13:40.265576   29206 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 22:13:40.265585   29206 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 22:13:40.265593   29206 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 22:13:40.265597   29206 command_runner.go:130] > #
	I0914 22:13:40.265607   29206 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 22:13:40.265620   29206 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 22:13:40.265632   29206 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 22:13:40.265644   29206 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 22:13:40.265649   29206 command_runner.go:130] > #
	I0914 22:13:40.265659   29206 command_runner.go:130] > # default_mounts_file = ""
	I0914 22:13:40.265667   29206 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 22:13:40.265682   29206 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 22:13:40.265692   29206 command_runner.go:130] > pids_limit = 1024
	I0914 22:13:40.265704   29206 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0914 22:13:40.265717   29206 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 22:13:40.265730   29206 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 22:13:40.265746   29206 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 22:13:40.265756   29206 command_runner.go:130] > # log_size_max = -1
	I0914 22:13:40.265770   29206 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0914 22:13:40.265786   29206 command_runner.go:130] > # log_to_journald = false
	I0914 22:13:40.265797   29206 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 22:13:40.265804   29206 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 22:13:40.265812   29206 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 22:13:40.265817   29206 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 22:13:40.265825   29206 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 22:13:40.265832   29206 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 22:13:40.265838   29206 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 22:13:40.265844   29206 command_runner.go:130] > # read_only = false
	I0914 22:13:40.265850   29206 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 22:13:40.265858   29206 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 22:13:40.265864   29206 command_runner.go:130] > # live configuration reload.
	I0914 22:13:40.265869   29206 command_runner.go:130] > # log_level = "info"
	I0914 22:13:40.265875   29206 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 22:13:40.265882   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:13:40.265888   29206 command_runner.go:130] > # log_filter = ""
	I0914 22:13:40.265894   29206 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 22:13:40.265901   29206 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 22:13:40.265907   29206 command_runner.go:130] > # separated by comma.
	I0914 22:13:40.265911   29206 command_runner.go:130] > # uid_mappings = ""
	I0914 22:13:40.265922   29206 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 22:13:40.265930   29206 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 22:13:40.265934   29206 command_runner.go:130] > # separated by comma.
	I0914 22:13:40.265940   29206 command_runner.go:130] > # gid_mappings = ""
	I0914 22:13:40.265947   29206 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 22:13:40.265955   29206 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:13:40.265962   29206 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:13:40.265968   29206 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 22:13:40.265974   29206 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 22:13:40.265982   29206 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:13:40.265989   29206 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:13:40.265995   29206 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 22:13:40.266002   29206 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 22:13:40.266010   29206 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 22:13:40.266018   29206 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 22:13:40.266024   29206 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 22:13:40.266030   29206 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 22:13:40.266038   29206 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 22:13:40.266046   29206 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 22:13:40.266051   29206 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 22:13:40.266060   29206 command_runner.go:130] > drop_infra_ctr = false
	I0914 22:13:40.266069   29206 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 22:13:40.266075   29206 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 22:13:40.266084   29206 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 22:13:40.266090   29206 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 22:13:40.266096   29206 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 22:13:40.266104   29206 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 22:13:40.266112   29206 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 22:13:40.266119   29206 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 22:13:40.266125   29206 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 22:13:40.266132   29206 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 22:13:40.266140   29206 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0914 22:13:40.266148   29206 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0914 22:13:40.266155   29206 command_runner.go:130] > # default_runtime = "runc"
	I0914 22:13:40.266160   29206 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 22:13:40.266169   29206 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 22:13:40.266180   29206 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 22:13:40.266188   29206 command_runner.go:130] > # creation as a file is not desired either.
	I0914 22:13:40.266195   29206 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 22:13:40.266203   29206 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 22:13:40.266207   29206 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 22:13:40.266213   29206 command_runner.go:130] > # ]
	I0914 22:13:40.266219   29206 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 22:13:40.266227   29206 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 22:13:40.266236   29206 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0914 22:13:40.266244   29206 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0914 22:13:40.266247   29206 command_runner.go:130] > #
	I0914 22:13:40.266254   29206 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0914 22:13:40.266260   29206 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0914 22:13:40.266266   29206 command_runner.go:130] > #  runtime_type = "oci"
	I0914 22:13:40.266271   29206 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0914 22:13:40.266279   29206 command_runner.go:130] > #  privileged_without_host_devices = false
	I0914 22:13:40.266283   29206 command_runner.go:130] > #  allowed_annotations = []
	I0914 22:13:40.266289   29206 command_runner.go:130] > # Where:
	I0914 22:13:40.266295   29206 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0914 22:13:40.266303   29206 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0914 22:13:40.266312   29206 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 22:13:40.266320   29206 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 22:13:40.266327   29206 command_runner.go:130] > #   in $PATH.
	I0914 22:13:40.266336   29206 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0914 22:13:40.266341   29206 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 22:13:40.266350   29206 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0914 22:13:40.266356   29206 command_runner.go:130] > #   state.
	I0914 22:13:40.266362   29206 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 22:13:40.266370   29206 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0914 22:13:40.266379   29206 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 22:13:40.266386   29206 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 22:13:40.266394   29206 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 22:13:40.266403   29206 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 22:13:40.266410   29206 command_runner.go:130] > #   The currently recognized values are:
	I0914 22:13:40.266417   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 22:13:40.266426   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 22:13:40.266434   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 22:13:40.266442   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 22:13:40.266450   29206 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 22:13:40.266465   29206 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 22:13:40.266480   29206 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 22:13:40.266494   29206 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0914 22:13:40.266505   29206 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 22:13:40.266515   29206 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 22:13:40.266522   29206 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 22:13:40.266532   29206 command_runner.go:130] > runtime_type = "oci"
	I0914 22:13:40.266543   29206 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 22:13:40.266552   29206 command_runner.go:130] > runtime_config_path = ""
	I0914 22:13:40.266559   29206 command_runner.go:130] > monitor_path = ""
	I0914 22:13:40.266566   29206 command_runner.go:130] > monitor_cgroup = ""
	I0914 22:13:40.266571   29206 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 22:13:40.266579   29206 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0914 22:13:40.266585   29206 command_runner.go:130] > # running containers
	I0914 22:13:40.266590   29206 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0914 22:13:40.266599   29206 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0914 22:13:40.266626   29206 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0914 22:13:40.266634   29206 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0914 22:13:40.266640   29206 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0914 22:13:40.266645   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0914 22:13:40.266650   29206 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0914 22:13:40.266656   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0914 22:13:40.266661   29206 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0914 22:13:40.266668   29206 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0914 22:13:40.266674   29206 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 22:13:40.266683   29206 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 22:13:40.266689   29206 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 22:13:40.266696   29206 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 22:13:40.266705   29206 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 22:13:40.266713   29206 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 22:13:40.266724   29206 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 22:13:40.266734   29206 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 22:13:40.266742   29206 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 22:13:40.266752   29206 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 22:13:40.266758   29206 command_runner.go:130] > # Example:
	I0914 22:13:40.266763   29206 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 22:13:40.266770   29206 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 22:13:40.266775   29206 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 22:13:40.266787   29206 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 22:13:40.266794   29206 command_runner.go:130] > # cpuset = 0
	I0914 22:13:40.266804   29206 command_runner.go:130] > # cpushares = "0-1"
	I0914 22:13:40.266814   29206 command_runner.go:130] > # Where:
	I0914 22:13:40.266824   29206 command_runner.go:130] > # The workload name is workload-type.
	I0914 22:13:40.266838   29206 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 22:13:40.266850   29206 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 22:13:40.266865   29206 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 22:13:40.266880   29206 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 22:13:40.266894   29206 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 22:13:40.266902   29206 command_runner.go:130] > # 
	I0914 22:13:40.266914   29206 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 22:13:40.266922   29206 command_runner.go:130] > #
	I0914 22:13:40.266934   29206 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 22:13:40.266948   29206 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 22:13:40.266961   29206 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 22:13:40.266972   29206 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 22:13:40.266981   29206 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 22:13:40.266987   29206 command_runner.go:130] > [crio.image]
	I0914 22:13:40.266994   29206 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 22:13:40.267005   29206 command_runner.go:130] > # default_transport = "docker://"
	I0914 22:13:40.267018   29206 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 22:13:40.267031   29206 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:13:40.267042   29206 command_runner.go:130] > # global_auth_file = ""
	I0914 22:13:40.267054   29206 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 22:13:40.267063   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:13:40.267075   29206 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0914 22:13:40.267089   29206 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 22:13:40.267102   29206 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:13:40.267114   29206 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:13:40.267123   29206 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 22:13:40.267131   29206 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 22:13:40.267140   29206 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 22:13:40.267146   29206 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 22:13:40.267154   29206 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 22:13:40.267161   29206 command_runner.go:130] > # pause_command = "/pause"
	I0914 22:13:40.267167   29206 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 22:13:40.267176   29206 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 22:13:40.267184   29206 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 22:13:40.267193   29206 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 22:13:40.267203   29206 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 22:13:40.267212   29206 command_runner.go:130] > # signature_policy = ""
	I0914 22:13:40.267220   29206 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 22:13:40.267231   29206 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 22:13:40.267237   29206 command_runner.go:130] > # changing them here.
	I0914 22:13:40.267242   29206 command_runner.go:130] > # insecure_registries = [
	I0914 22:13:40.267246   29206 command_runner.go:130] > # ]
	I0914 22:13:40.267259   29206 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 22:13:40.267267   29206 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 22:13:40.267274   29206 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 22:13:40.267279   29206 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 22:13:40.267286   29206 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 22:13:40.267294   29206 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0914 22:13:40.267300   29206 command_runner.go:130] > # CNI plugins.
	I0914 22:13:40.267305   29206 command_runner.go:130] > [crio.network]
	I0914 22:13:40.267313   29206 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 22:13:40.267321   29206 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 22:13:40.267328   29206 command_runner.go:130] > # cni_default_network = ""
	I0914 22:13:40.267334   29206 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 22:13:40.267341   29206 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 22:13:40.267346   29206 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 22:13:40.267352   29206 command_runner.go:130] > # plugin_dirs = [
	I0914 22:13:40.267356   29206 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 22:13:40.267362   29206 command_runner.go:130] > # ]
	I0914 22:13:40.267368   29206 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 22:13:40.267375   29206 command_runner.go:130] > [crio.metrics]
	I0914 22:13:40.267380   29206 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 22:13:40.267386   29206 command_runner.go:130] > enable_metrics = true
	I0914 22:13:40.267391   29206 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 22:13:40.267398   29206 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 22:13:40.267404   29206 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0914 22:13:40.267413   29206 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 22:13:40.267422   29206 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 22:13:40.267429   29206 command_runner.go:130] > # metrics_collectors = [
	I0914 22:13:40.267433   29206 command_runner.go:130] > # 	"operations",
	I0914 22:13:40.267437   29206 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 22:13:40.267444   29206 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 22:13:40.267448   29206 command_runner.go:130] > # 	"operations_errors",
	I0914 22:13:40.267454   29206 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 22:13:40.267459   29206 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 22:13:40.267484   29206 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 22:13:40.267493   29206 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 22:13:40.267502   29206 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 22:13:40.267509   29206 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 22:13:40.267513   29206 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 22:13:40.267518   29206 command_runner.go:130] > # 	"containers_oom_total",
	I0914 22:13:40.267523   29206 command_runner.go:130] > # 	"containers_oom",
	I0914 22:13:40.267527   29206 command_runner.go:130] > # 	"processes_defunct",
	I0914 22:13:40.267533   29206 command_runner.go:130] > # 	"operations_total",
	I0914 22:13:40.267538   29206 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 22:13:40.267545   29206 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 22:13:40.267549   29206 command_runner.go:130] > # 	"operations_errors_total",
	I0914 22:13:40.267556   29206 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 22:13:40.267562   29206 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 22:13:40.267569   29206 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 22:13:40.267573   29206 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 22:13:40.267580   29206 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 22:13:40.267584   29206 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 22:13:40.267590   29206 command_runner.go:130] > # ]
	I0914 22:13:40.267596   29206 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 22:13:40.267602   29206 command_runner.go:130] > # metrics_port = 9090
	I0914 22:13:40.267607   29206 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 22:13:40.267614   29206 command_runner.go:130] > # metrics_socket = ""
	I0914 22:13:40.267619   29206 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 22:13:40.267628   29206 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 22:13:40.267635   29206 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 22:13:40.267642   29206 command_runner.go:130] > # certificate on any modification event.
	I0914 22:13:40.267646   29206 command_runner.go:130] > # metrics_cert = ""
	I0914 22:13:40.267653   29206 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 22:13:40.267658   29206 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 22:13:40.267663   29206 command_runner.go:130] > # metrics_key = ""
	I0914 22:13:40.267670   29206 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 22:13:40.267674   29206 command_runner.go:130] > [crio.tracing]
	I0914 22:13:40.267680   29206 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 22:13:40.267686   29206 command_runner.go:130] > # enable_tracing = false
	I0914 22:13:40.267692   29206 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0914 22:13:40.267699   29206 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 22:13:40.267705   29206 command_runner.go:130] > # Number of samples to collect per million spans.
	I0914 22:13:40.267711   29206 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 22:13:40.267717   29206 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 22:13:40.267724   29206 command_runner.go:130] > [crio.stats]
	I0914 22:13:40.267730   29206 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 22:13:40.267737   29206 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 22:13:40.267744   29206 command_runner.go:130] > # stats_collection_period = 0
	I0914 22:13:40.267815   29206 cni.go:84] Creating CNI manager for ""
	I0914 22:13:40.267826   29206 cni.go:136] 3 nodes found, recommending kindnet
	I0914 22:13:40.267834   29206 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:13:40.267851   29206 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124911 NodeName:multinode-124911-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:13:40.267952   29206 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-124911-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:13:40.267998   29206 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-124911-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:13:40.268044   29206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:13:40.276247   29206 command_runner.go:130] > kubeadm
	I0914 22:13:40.276266   29206 command_runner.go:130] > kubectl
	I0914 22:13:40.276273   29206 command_runner.go:130] > kubelet
	I0914 22:13:40.276310   29206 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:13:40.276366   29206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0914 22:13:40.284119   29206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0914 22:13:40.299292   29206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:13:40.314580   29206 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0914 22:13:40.318396   29206 command_runner.go:130] > 192.168.39.116	control-plane.minikube.internal
	I0914 22:13:40.318461   29206 host.go:66] Checking if "multinode-124911" exists ...
	I0914 22:13:40.318750   29206 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:13:40.318873   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:13:40.318915   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:13:40.334911   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I0914 22:13:40.335281   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:13:40.335897   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:13:40.335936   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:13:40.336288   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:13:40.336458   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:13:40.336626   29206 start.go:304] JoinCluster: &{Name:multinode-124911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-124911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.254 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:13:40.336730   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0914 22:13:40.336744   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:13:40.339581   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:13:40.340138   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:13:40.340170   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:13:40.340367   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:13:40.340550   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:13:40.340749   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:13:40.340940   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:13:40.525173   29206 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 96lkp5.kpt8275y3ftok572 --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:13:40.527572   29206 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0914 22:13:40.527607   29206 host.go:66] Checking if "multinode-124911" exists ...
	I0914 22:13:40.527887   29206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:13:40.527922   29206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:13:40.542517   29206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I0914 22:13:40.542971   29206 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:13:40.543441   29206 main.go:141] libmachine: Using API Version  1
	I0914 22:13:40.543480   29206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:13:40.543807   29206 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:13:40.544003   29206 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:13:40.544203   29206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-124911-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0914 22:13:40.544229   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:13:40.547454   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:13:40.547907   29206 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:09:35 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:13:40.547936   29206 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:13:40.548104   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:13:40.548236   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:13:40.548402   29206 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:13:40.548541   29206 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:13:40.744170   29206 command_runner.go:130] > node/multinode-124911-m03 cordoned
	I0914 22:13:43.797415   29206 command_runner.go:130] > pod "busybox-5bc68d56bd-c9cz8" has DeletionTimestamp older than 1 seconds, skipping
	I0914 22:13:43.797461   29206 command_runner.go:130] > node/multinode-124911-m03 drained
	I0914 22:13:43.799483   29206 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0914 22:13:43.799509   29206 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-vjv8m, kube-system/kube-proxy-5tcff
	I0914 22:13:43.799534   29206 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-124911-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.255306834s)
	I0914 22:13:43.799553   29206 node.go:108] successfully drained node "m03"
	I0914 22:13:43.800003   29206 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:13:43.800215   29206 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:13:43.800452   29206 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0914 22:13:43.800493   29206 round_trippers.go:463] DELETE https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:13:43.800501   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:43.800508   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:43.800514   29206 round_trippers.go:473]     Content-Type: application/json
	I0914 22:13:43.800522   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:43.813337   29206 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0914 22:13:43.813353   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:43.813359   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:43.813365   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:43.813370   29206 round_trippers.go:580]     Content-Length: 171
	I0914 22:13:43.813375   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:43 GMT
	I0914 22:13:43.813380   29206 round_trippers.go:580]     Audit-Id: 3542a72a-3be6-4d53-8a74-ca3afca66b34
	I0914 22:13:43.813388   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:43.813393   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:43.813410   29206 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-124911-m03","kind":"nodes","uid":"5e8b04da-e8ae-403d-9e94-bb008093a0b9"}}
	I0914 22:13:43.813440   29206 node.go:124] successfully deleted node "m03"
	I0914 22:13:43.813449   29206 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
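The raw DELETE against /api/v1/nodes/multinode-124911-m03 logged above is the REST form of a typed client-go node deletion. A minimal sketch of the equivalent call, assuming the standard k8s.io/client-go API; the kubeconfig path is taken from the log purely for illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the log above uses the test runner's profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17243-6287/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of the logged DELETE /api/v1/nodes/multinode-124911-m03.
	if err := cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-124911-m03", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node multinode-124911-m03 deleted")
}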
	I0914 22:13:43.813465   29206 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0914 22:13:43.813483   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 96lkp5.kpt8275y3ftok572 --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-124911-m03"
	I0914 22:13:43.863349   29206 command_runner.go:130] ! W0914 22:13:43.854198    2317 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0914 22:13:43.863677   29206 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0914 22:13:43.988871   29206 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0914 22:13:43.988904   29206 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0914 22:13:44.756607   29206 command_runner.go:130] > [preflight] Running pre-flight checks
	I0914 22:13:44.756634   29206 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0914 22:13:44.756648   29206 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0914 22:13:44.756661   29206 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:13:44.756673   29206 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:13:44.756685   29206 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 22:13:44.756696   29206 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0914 22:13:44.756709   29206 command_runner.go:130] > This node has joined the cluster:
	I0914 22:13:44.756723   29206 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0914 22:13:44.756735   29206 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0914 22:13:44.756749   29206 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0914 22:13:44.756792   29206 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0914 22:13:45.023503   29206 start.go:306] JoinCluster complete in 4.68687136s
	I0914 22:13:45.023534   29206 cni.go:84] Creating CNI manager for ""
	I0914 22:13:45.023541   29206 cni.go:136] 3 nodes found, recommending kindnet
	I0914 22:13:45.023593   29206 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:13:45.028661   29206 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 22:13:45.028690   29206 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0914 22:13:45.028701   29206 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0914 22:13:45.028710   29206 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:13:45.028719   29206 command_runner.go:130] > Access: 2023-09-14 22:09:36.137726050 +0000
	I0914 22:13:45.028728   29206 command_runner.go:130] > Modify: 2023-09-13 23:09:37.000000000 +0000
	I0914 22:13:45.028738   29206 command_runner.go:130] > Change: 2023-09-14 22:09:34.480726050 +0000
	I0914 22:13:45.028748   29206 command_runner.go:130] >  Birth: -
	I0914 22:13:45.028791   29206 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 22:13:45.028803   29206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:13:45.046085   29206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 22:13:45.385480   29206 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:13:45.385502   29206 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:13:45.385507   29206 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0914 22:13:45.385512   29206 command_runner.go:130] > daemonset.apps/kindnet configured
	I0914 22:13:45.385831   29206 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:13:45.386044   29206 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:13:45.386333   29206 round_trippers.go:463] GET https://192.168.39.116:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 22:13:45.386350   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.386360   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.386368   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.393365   29206 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 22:13:45.393389   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.393397   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.393402   29206 round_trippers.go:580]     Content-Length: 291
	I0914 22:13:45.393407   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.393412   29206 round_trippers.go:580]     Audit-Id: 98dbd508-279e-4648-beba-63e007e33262
	I0914 22:13:45.393417   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.393422   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.393427   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.393447   29206 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d40ee9-9834-4f82-84c2-51e3c14c181f","resourceVersion":"894","creationTimestamp":"2023-09-14T21:59:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0914 22:13:45.393533   29206 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-124911" context rescaled to 1 replicas
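The coredns rescale above goes through the Deployment's Scale subresource: a GET on .../deployments/coredns/scale followed by a write of spec.replicas. A rough sketch of the same round trip in typed client-go, assuming a clientset built as in the earlier sketch:

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS mirrors the logged GET/UPDATE on the coredns Scale
// subresource: read the current scale, set spec.replicas, write it back.
func rescaleCoreDNS(cs *kubernetes.Clientset, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
	return err
}

Calling rescaleCoreDNS(cs, 1) reproduces the "rescaled to 1 replicas" step shown above.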
	I0914 22:13:45.393569   29206 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0914 22:13:45.395478   29206 out.go:177] * Verifying Kubernetes components...
	I0914 22:13:45.397060   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:13:45.410650   29206 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:13:45.410900   29206 kapi.go:59] client config for multinode-124911: &rest.Config{Host:"https://192.168.39.116:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/multinode-124911/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:13:45.411110   29206 node_ready.go:35] waiting up to 6m0s for node "multinode-124911-m03" to be "Ready" ...
	I0914 22:13:45.411166   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:13:45.411173   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.411180   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.411186   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.413270   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:45.413288   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.413295   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.413300   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.413305   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.413310   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.413315   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.413322   29206 round_trippers.go:580]     Audit-Id: 4cfec591-92be-4691-83c0-410aa45e2c5c
	I0914 22:13:45.413523   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m03","uid":"f2b42e9b-5b3c-418c-b9cc-2ed5e12a4a61","resourceVersion":"1215","creationTimestamp":"2023-09-14T22:13:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:13:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:13:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0914 22:13:45.413847   29206 node_ready.go:49] node "multinode-124911-m03" has status "Ready":"True"
	I0914 22:13:45.413864   29206 node_ready.go:38] duration metric: took 2.739995ms waiting for node "multinode-124911-m03" to be "Ready" ...
	I0914 22:13:45.413872   29206 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:13:45.413921   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods
	I0914 22:13:45.413930   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.413937   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.413942   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.417523   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:13:45.417537   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.417544   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.417550   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.417561   29206 round_trippers.go:580]     Audit-Id: e87bdcd9-98a2-4ad2-bd64-05145afa82d2
	I0914 22:13:45.417575   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.417586   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.417602   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.419104   29206 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"890","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82087 chars]
	I0914 22:13:45.421465   29206 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.421527   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssj9q
	I0914 22:13:45.421540   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.421548   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.421557   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.423537   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:13:45.423551   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.423557   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.423562   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.423567   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.423572   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.423581   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.423590   29206 round_trippers.go:580]     Audit-Id: bd5c6b43-912a-487a-bdc4-37072aaad5ed
	I0914 22:13:45.423929   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssj9q","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"aadacae8-9f4d-4c24-91c7-78a88d187f73","resourceVersion":"890","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d567a7ef-8f18-493b-b351-aacd96e06f67","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d567a7ef-8f18-493b-b351-aacd96e06f67\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0914 22:13:45.424299   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:13:45.424310   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.424317   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.424323   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.426267   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:13:45.426293   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.426302   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.426310   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.426319   29206 round_trippers.go:580]     Audit-Id: 74c0b0a7-c7e1-45dd-9c6a-a8096c37d403
	I0914 22:13:45.426331   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.426340   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.426352   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.426529   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:13:45.426942   29206 pod_ready.go:92] pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace has status "Ready":"True"
	I0914 22:13:45.426962   29206 pod_ready.go:81] duration metric: took 5.479059ms waiting for pod "coredns-5dd5756b68-ssj9q" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.426974   29206 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.427029   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-124911
	I0914 22:13:45.427039   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.427049   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.427058   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.429251   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:45.429265   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.429271   29206 round_trippers.go:580]     Audit-Id: a1832b0c-82fe-4ce7-af8b-9139a1f0074d
	I0914 22:13:45.429276   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.429282   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.429287   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.429292   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.429300   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.429457   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124911","namespace":"kube-system","uid":"1b195f1a-48a6-4b46-a819-2aeb9fe4e00c","resourceVersion":"882","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.116:2379","kubernetes.io/config.hash":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.mirror":"87beacc0664a01f1abb8543be732cb2e","kubernetes.io/config.seen":"2023-09-14T21:59:20.641783376Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0914 22:13:45.429813   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:13:45.429829   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.429838   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.429846   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.431675   29206 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:13:45.431695   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.431704   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.431712   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.431719   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.431727   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.431736   29206 round_trippers.go:580]     Audit-Id: 5dfb52a7-64ac-41e5-8dd9-d137075ea3e6
	I0914 22:13:45.431745   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.431944   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:13:45.432317   29206 pod_ready.go:92] pod "etcd-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:13:45.432332   29206 pod_ready.go:81] duration metric: took 5.34887ms waiting for pod "etcd-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.432355   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.432413   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124911
	I0914 22:13:45.432423   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.432434   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.432448   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.440147   29206 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 22:13:45.440161   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.440174   29206 round_trippers.go:580]     Audit-Id: 84ed3191-6c6f-44f2-a586-804f31fbc5e7
	I0914 22:13:45.440185   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.440192   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.440203   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.440213   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.440221   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.440370   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124911","namespace":"kube-system","uid":"e9a93d33-82f3-4cfe-9b2c-92560dd09d09","resourceVersion":"849","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.116:8443","kubernetes.io/config.hash":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.mirror":"45ad3e9dc71d2c9a455002dbdc235854","kubernetes.io/config.seen":"2023-09-14T21:59:20.641778793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0914 22:13:45.440804   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:13:45.440820   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.440828   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.440835   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.445463   29206 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:13:45.445475   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.445481   29206 round_trippers.go:580]     Audit-Id: 24c1cbc5-b46e-4279-abea-c498117a6a0a
	I0914 22:13:45.445486   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.445491   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.445497   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.445506   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.445517   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.445774   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:13:45.446140   29206 pod_ready.go:92] pod "kube-apiserver-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:13:45.446155   29206 pod_ready.go:81] duration metric: took 13.788143ms waiting for pod "kube-apiserver-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.446167   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.446212   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124911
	I0914 22:13:45.446222   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.446233   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.446243   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.448638   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:45.448648   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.448654   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.448659   29206 round_trippers.go:580]     Audit-Id: 68978773-52e9-4d84-b94c-e0ee3f706ee9
	I0914 22:13:45.448664   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.448669   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.448674   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.448682   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.448933   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124911","namespace":"kube-system","uid":"3efae123-9cdd-457a-a317-77370a6c5288","resourceVersion":"854","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.mirror":"0364c35ea02d584f30ca0c3d8a47dfb6","kubernetes.io/config.seen":"2023-09-14T21:59:20.641781682Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0914 22:13:45.449304   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:13:45.449316   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.449323   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.449331   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.452083   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:45.452097   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.452104   29206 round_trippers.go:580]     Audit-Id: 19bc028e-de81-49de-b3a9-1a23d315765e
	I0914 22:13:45.452109   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.452114   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.452120   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.452128   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.452137   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.452322   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:13:45.452605   29206 pod_ready.go:92] pod "kube-controller-manager-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:13:45.452618   29206 pod_ready.go:81] duration metric: took 6.443907ms waiting for pod "kube-controller-manager-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.452626   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.611519   29206 request.go:629] Waited for 158.846858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:13:45.611584   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kd4p
	I0914 22:13:45.611589   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.611607   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.611615   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.614134   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:45.614152   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.614159   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.614164   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.614171   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.614179   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.614187   29206 round_trippers.go:580]     Audit-Id: 4ec906a5-fc42-4e8c-8755-b2d2fa2531ae
	I0914 22:13:45.614199   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.614361   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2kd4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"de9e2ee3-364a-447b-9d7f-be85d86838ae","resourceVersion":"820","creationTimestamp":"2023-09-14T21:59:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0914 22:13:45.812164   29206 request.go:629] Waited for 197.359292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:13:45.812245   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:13:45.812256   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:45.812270   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:45.812283   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:45.815726   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:13:45.815751   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:45.815762   29206 round_trippers.go:580]     Audit-Id: 4edbf326-a20a-4de8-9405-2b6d7e8d27a3
	I0914 22:13:45.815770   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:45.815778   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:45.815786   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:45.815794   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:45.815806   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:45 GMT
	I0914 22:13:45.815959   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:13:45.816285   29206 pod_ready.go:92] pod "kube-proxy-2kd4p" in "kube-system" namespace has status "Ready":"True"
	I0914 22:13:45.816302   29206 pod_ready.go:81] duration metric: took 363.670794ms waiting for pod "kube-proxy-2kd4p" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:45.816313   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5tcff" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:46.011743   29206 request.go:629] Waited for 195.371587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:13:46.011812   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:13:46.011820   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:46.011829   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:46.011837   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:46.014610   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:46.014633   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:46.014643   29206 round_trippers.go:580]     Audit-Id: 1d989171-4a1d-4f22-8e3f-120d5d08cc86
	I0914 22:13:46.014652   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:46.014660   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:46.014668   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:46.014676   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:46.014684   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:46 GMT
	I0914 22:13:46.014907   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5tcff","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc8d22f-954e-4a49-878e-9d1711d49c40","resourceVersion":"1218","creationTimestamp":"2023-09-14T22:01:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0914 22:13:46.211852   29206 request.go:629] Waited for 196.395358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:13:46.211946   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:13:46.211963   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:46.211977   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:46.211991   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:46.215072   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:13:46.215098   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:46.215108   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:46.215117   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:46.215126   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:46.215143   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:46.215151   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:46 GMT
	I0914 22:13:46.215163   29206 round_trippers.go:580]     Audit-Id: 95b91532-d345-44e2-a264-02a07e3fa871
	I0914 22:13:46.215325   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m03","uid":"f2b42e9b-5b3c-418c-b9cc-2ed5e12a4a61","resourceVersion":"1215","creationTimestamp":"2023-09-14T22:13:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:13:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:13:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0914 22:13:46.411929   29206 request.go:629] Waited for 196.19778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:13:46.411999   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:13:46.412007   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:46.412019   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:46.412029   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:46.415326   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:13:46.415347   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:46.415357   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:46.415366   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:46 GMT
	I0914 22:13:46.415372   29206 round_trippers.go:580]     Audit-Id: 8cdfcff2-c152-41b5-aca6-84260106d6ac
	I0914 22:13:46.415380   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:46.415387   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:46.415395   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:46.415522   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5tcff","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc8d22f-954e-4a49-878e-9d1711d49c40","resourceVersion":"1218","creationTimestamp":"2023-09-14T22:01:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0914 22:13:46.611754   29206 request.go:629] Waited for 195.696248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:13:46.611810   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:13:46.611815   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:46.611822   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:46.611828   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:46.614028   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:46.614049   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:46.614058   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:46.614066   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:46.614072   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:46 GMT
	I0914 22:13:46.614083   29206 round_trippers.go:580]     Audit-Id: 85d4d6de-dce7-4217-b234-e7ef029f8be6
	I0914 22:13:46.614094   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:46.614106   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:46.614321   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m03","uid":"f2b42e9b-5b3c-418c-b9cc-2ed5e12a4a61","resourceVersion":"1215","creationTimestamp":"2023-09-14T22:13:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:13:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:13:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0914 22:13:47.115356   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tcff
	I0914 22:13:47.115379   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:47.115387   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:47.115397   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:47.118230   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:47.118251   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:47.118261   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:47.118267   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:47.118273   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:47.118279   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:47 GMT
	I0914 22:13:47.118287   29206 round_trippers.go:580]     Audit-Id: 8e3e14b9-47c2-45b3-ae06-c328d42197b8
	I0914 22:13:47.118296   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:47.118868   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5tcff","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc8d22f-954e-4a49-878e-9d1711d49c40","resourceVersion":"1234","creationTimestamp":"2023-09-14T22:01:33Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0914 22:13:47.119239   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m03
	I0914 22:13:47.119249   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:47.119256   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:47.119261   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:47.121415   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:47.121438   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:47.121448   29206 round_trippers.go:580]     Audit-Id: 11bfddef-4441-453b-a301-f0e80e88e46b
	I0914 22:13:47.121455   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:47.121460   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:47.121465   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:47.121471   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:47.121480   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:47 GMT
	I0914 22:13:47.121598   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m03","uid":"f2b42e9b-5b3c-418c-b9cc-2ed5e12a4a61","resourceVersion":"1215","creationTimestamp":"2023-09-14T22:13:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:13:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:13:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0914 22:13:47.121836   29206 pod_ready.go:92] pod "kube-proxy-5tcff" in "kube-system" namespace has status "Ready":"True"
	I0914 22:13:47.121850   29206 pod_ready.go:81] duration metric: took 1.305531825s waiting for pod "kube-proxy-5tcff" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:47.121863   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:47.212226   29206 request.go:629] Waited for 90.317084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:13:47.212276   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4qjg
	I0914 22:13:47.212281   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:47.212312   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:47.212324   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:47.215287   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:47.215304   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:47.215310   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:47.215316   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:47.215321   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:47.215326   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:47 GMT
	I0914 22:13:47.215337   29206 round_trippers.go:580]     Audit-Id: 1fcd34a0-9cb7-4e04-b67a-29f51ada4b71
	I0914 22:13:47.215357   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:47.215929   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4qjg","generateName":"kube-proxy-","namespace":"kube-system","uid":"8214b42e-6656-4e01-bc47-82d6ab6592e5","resourceVersion":"1061","creationTimestamp":"2023-09-14T22:00:41Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70a82ec8-2bff-4ca4-a0e8-b2e2a3fc8ec0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0914 22:13:47.411693   29206 request.go:629] Waited for 195.364196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:13:47.411750   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911-m02
	I0914 22:13:47.411755   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:47.411762   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:47.411768   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:47.414422   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:47.414445   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:47.414455   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:47 GMT
	I0914 22:13:47.414463   29206 round_trippers.go:580]     Audit-Id: 8ea73890-b4d4-4439-8710-1bfe59188c14
	I0914 22:13:47.414470   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:47.414478   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:47.414487   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:47.414495   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:47.414955   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911-m02","uid":"8e34404b-42e6-43f4-a225-55ff2168406c","resourceVersion":"1041","creationTimestamp":"2023-09-14T22:12:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:12:04Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0914 22:13:47.415198   29206 pod_ready.go:92] pod "kube-proxy-c4qjg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:13:47.415212   29206 pod_ready.go:81] duration metric: took 293.342758ms waiting for pod "kube-proxy-c4qjg" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:47.415221   29206 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:47.611633   29206 request.go:629] Waited for 196.346004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:13:47.611683   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124911
	I0914 22:13:47.611688   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:47.611695   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:47.611702   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:47.614518   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:47.614533   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:47.614539   29206 round_trippers.go:580]     Audit-Id: b10081f0-bbe1-4026-8745-88f42a4b04eb
	I0914 22:13:47.614544   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:47.614550   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:47.614555   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:47.614561   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:47.614566   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:47 GMT
	I0914 22:13:47.614831   29206 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124911","namespace":"kube-system","uid":"f8d502b7-9ee7-474e-ab64-9f721ee6970e","resourceVersion":"864","creationTimestamp":"2023-09-14T21:59:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.mirror":"1c19e8d6787ee446a44e05a606bee863","kubernetes.io/config.seen":"2023-09-14T21:59:20.641782607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T21:59:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0914 22:13:47.811536   29206 request.go:629] Waited for 196.317402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:13:47.811635   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes/multinode-124911
	I0914 22:13:47.811645   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:47.811652   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:47.811658   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:47.814566   29206 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:13:47.814583   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:47.814588   29206 round_trippers.go:580]     Audit-Id: 7a63a506-33e6-42e5-a1a5-3540a4cdbc4e
	I0914 22:13:47.814594   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:47.814600   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:47.814608   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:47.814621   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:47.814633   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:47 GMT
	I0914 22:13:47.814809   29206 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T21:59:17Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0914 22:13:47.815122   29206 pod_ready.go:92] pod "kube-scheduler-multinode-124911" in "kube-system" namespace has status "Ready":"True"
	I0914 22:13:47.815136   29206 pod_ready.go:81] duration metric: took 399.909958ms waiting for pod "kube-scheduler-multinode-124911" in "kube-system" namespace to be "Ready" ...
	I0914 22:13:47.815146   29206 pod_ready.go:38] duration metric: took 2.401264742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:13:47.815166   29206 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:13:47.815220   29206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:13:47.829148   29206 system_svc.go:56] duration metric: took 13.97664ms WaitForService to wait for kubelet.
	I0914 22:13:47.829170   29206 kubeadm.go:581] duration metric: took 2.435571735s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:13:47.829190   29206 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:13:48.011631   29206 request.go:629] Waited for 182.377961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.116:8443/api/v1/nodes
	I0914 22:13:48.011713   29206 round_trippers.go:463] GET https://192.168.39.116:8443/api/v1/nodes
	I0914 22:13:48.011720   29206 round_trippers.go:469] Request Headers:
	I0914 22:13:48.011729   29206 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:13:48.011738   29206 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 22:13:48.015167   29206 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:13:48.015194   29206 round_trippers.go:577] Response Headers:
	I0914 22:13:48.015205   29206 round_trippers.go:580]     Audit-Id: cc0e42d8-1809-4ca6-ae77-e9d7b6d2ef97
	I0914 22:13:48.015220   29206 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:13:48.015228   29206 round_trippers.go:580]     Content-Type: application/json
	I0914 22:13:48.015235   29206 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 06317c60-1bb3-4f5c-8f7d-931bf6f26987
	I0914 22:13:48.015244   29206 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 528a111e-0fac-4e55-bb07-44a60c1d2695
	I0914 22:13:48.015252   29206 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:13:48 GMT
	I0914 22:13:48.015998   29206 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1237"},"items":[{"metadata":{"name":"multinode-124911","uid":"b327f595-d29a-489b-a46e-64b31048819c","resourceVersion":"909","creationTimestamp":"2023-09-14T21:59:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124911","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-124911","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T21_59_21_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15418 chars]
	I0914 22:13:48.016533   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:13:48.016551   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:13:48.016560   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:13:48.016564   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:13:48.016567   29206 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:13:48.016571   29206 node_conditions.go:123] node cpu capacity is 2
	I0914 22:13:48.016574   29206 node_conditions.go:105] duration metric: took 187.38032ms to run NodePressure ...
	I0914 22:13:48.016584   29206 start.go:228] waiting for startup goroutines ...
	I0914 22:13:48.016601   29206 start.go:242] writing updated cluster config ...
	I0914 22:13:48.016869   29206 ssh_runner.go:195] Run: rm -f paused
	I0914 22:13:48.063634   29206 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:13:48.066259   29206 out.go:177] * Done! kubectl is now configured to use "multinode-124911" cluster and "default" namespace by default
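
	(Editor's illustrative sketch, not part of the captured log: the NodePressure check above reads node capacities ("node cpu capacity is 2", "node storage ephemeral capacity is 17784752Ki") through the Kubernetes API via client-go, which is what the round_trippers/request.go lines trace. A minimal, hedged reproduction of that read is below; the kubeconfig path and error handling are assumptions, not taken from the test.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: use the default kubeconfig written by "minikube start".
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List all nodes, as the log does with GET /api/v1/nodes.
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}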
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:09:35 UTC, ends at Thu 2023-09-14 22:13:49 UTC. --
	Sep 14 22:13:48 multinode-124911 crio[705]: time="2023-09-14 22:13:48.691286485Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-pmkvp,Uid:854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694729423922169666,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:10:07.827685668Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-ssj9q,Uid:aadacae8-9f4d-4c24-91c7-78a88d187f73,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1694729423624795859,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:10:07.827686772Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&PodSandboxMetadata{Name:kindnet-274xj,Uid:6d12f7c0-2ad9-436f-ab5d-528c4823865c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694729408212240560,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2023-09-14T22:10:07.827689936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&PodSandboxMetadata{Name:kube-proxy-2kd4p,Uid:de9e2ee3-364a-447b-9d7f-be85d86838ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694729408200069287,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9e2ee3-364a-447b-9d7f-be85d86838ae,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:10:07.827690996Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:aada9d30-e15d-4405-a7e2-e979dd4b8e0d,Namespace:kube-system,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1694729408184020184,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tm
p\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T22:10:07.827684244Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-124911,Uid:0364c35ea02d584f30ca0c3d8a47dfb6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694729402384246044,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0364c35ea02d584f30ca0c3d8a47dfb6,kubernetes.io/config.seen: 2023-09-14T22:10:01.834375373Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&PodSandboxMetada
ta{Name:kube-scheduler-multinode-124911,Uid:1c19e8d6787ee446a44e05a606bee863,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694729402372861004,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1c19e8d6787ee446a44e05a606bee863,kubernetes.io/config.seen: 2023-09-14T22:10:01.834377274Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&PodSandboxMetadata{Name:etcd-multinode-124911,Uid:87beacc0664a01f1abb8543be732cb2e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694729402357266968,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.116:2379,kubernetes.io/config.hash: 87beacc0664a01f1abb8543be732cb2e,kubernetes.io/config.seen: 2023-09-14T22:10:01.834320228Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-124911,Uid:45ad3e9dc71d2c9a455002dbdc235854,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694729402346035218,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.116:8443,kubern
etes.io/config.hash: 45ad3e9dc71d2c9a455002dbdc235854,kubernetes.io/config.seen: 2023-09-14T22:10:01.834323431Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=24817fd6-e8df-4256-884c-6303e0dca6bb name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 22:13:48 multinode-124911 crio[705]: time="2023-09-14 22:13:48.692596559Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=58d68506-cd4d-4f08-81cd-58aca2c82c08 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:13:48 multinode-124911 crio[705]: time="2023-09-14 22:13:48.692667555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=58d68506-cd4d-4f08-81cd-58aca2c82c08 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:13:48 multinode-124911 crio[705]: time="2023-09-14 22:13:48.693033102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98c2c492d335fc0cfd6c207d61f0e24228f7e911ee0d922cdff0fbd73967d560,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694729440080950772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3182cd6404df57b460d7bd993044b348a732125988484cd184ad6f87f57251b,PodSandboxId:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694729427120849448,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652,PodSandboxId:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694729424238398698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48,PodSandboxId:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694729415218916225,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694729408973401057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d,PodSandboxId:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694729408856384927,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d,PodSandboxId:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694729403660476980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations
:map[string]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba,PodSandboxId:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694729403175246189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,},Annotations:map[string]
string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2,PodSandboxId:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694729403005587846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,}
,Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0,PodSandboxId:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694729402795298937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations
:map[string]string{io.kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=58d68506-cd4d-4f08-81cd-58aca2c82c08 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.064052656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=91f534f0-2d0e-4833-b1ea-4e2a910f72a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.064143181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=91f534f0-2d0e-4833-b1ea-4e2a910f72a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.064489683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98c2c492d335fc0cfd6c207d61f0e24228f7e911ee0d922cdff0fbd73967d560,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694729440080950772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3182cd6404df57b460d7bd993044b348a732125988484cd184ad6f87f57251b,PodSandboxId:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694729427120849448,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652,PodSandboxId:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694729424238398698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48,PodSandboxId:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694729415218916225,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694729408973401057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d,PodSandboxId:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694729408856384927,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d,PodSandboxId:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694729403660476980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations
:map[string]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba,PodSandboxId:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694729403175246189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,},Annotations:map[string]
string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2,PodSandboxId:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694729403005587846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,}
,Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0,PodSandboxId:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694729402795298937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations
:map[string]string{io.kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=91f534f0-2d0e-4833-b1ea-4e2a910f72a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.097193257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ab637644-8c17-4ef5-80df-01877046754a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.097280357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ab637644-8c17-4ef5-80df-01877046754a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.097551225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98c2c492d335fc0cfd6c207d61f0e24228f7e911ee0d922cdff0fbd73967d560,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694729440080950772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3182cd6404df57b460d7bd993044b348a732125988484cd184ad6f87f57251b,PodSandboxId:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694729427120849448,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652,PodSandboxId:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694729424238398698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48,PodSandboxId:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694729415218916225,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694729408973401057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d,PodSandboxId:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694729408856384927,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d,PodSandboxId:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694729403660476980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations
:map[string]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba,PodSandboxId:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694729403175246189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,},Annotations:map[string]
string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2,PodSandboxId:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694729403005587846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,}
,Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0,PodSandboxId:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694729402795298937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations
:map[string]string{io.kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ab637644-8c17-4ef5-80df-01877046754a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.128993219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=851a132a-4884-4173-9224-7ece99baba08 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.129078974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=851a132a-4884-4173-9224-7ece99baba08 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.129377269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98c2c492d335fc0cfd6c207d61f0e24228f7e911ee0d922cdff0fbd73967d560,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694729440080950772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3182cd6404df57b460d7bd993044b348a732125988484cd184ad6f87f57251b,PodSandboxId:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694729427120849448,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652,PodSandboxId:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694729424238398698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48,PodSandboxId:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694729415218916225,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694729408973401057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d,PodSandboxId:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694729408856384927,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d,PodSandboxId:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694729403660476980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations
:map[string]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba,PodSandboxId:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694729403175246189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,},Annotations:map[string]
string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2,PodSandboxId:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694729403005587846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,}
,Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0,PodSandboxId:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694729402795298937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations
:map[string]string{io.kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=851a132a-4884-4173-9224-7ece99baba08 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.163419294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4515c9d9-09e7-4cd7-8a14-ec8d3d7e870f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.163527582Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4515c9d9-09e7-4cd7-8a14-ec8d3d7e870f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.163995174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98c2c492d335fc0cfd6c207d61f0e24228f7e911ee0d922cdff0fbd73967d560,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694729440080950772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3182cd6404df57b460d7bd993044b348a732125988484cd184ad6f87f57251b,PodSandboxId:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694729427120849448,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652,PodSandboxId:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694729424238398698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48,PodSandboxId:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694729415218916225,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694729408973401057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d,PodSandboxId:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694729408856384927,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d,PodSandboxId:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694729403660476980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations
:map[string]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba,PodSandboxId:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694729403175246189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,},Annotations:map[string]
string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2,PodSandboxId:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694729403005587846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,}
,Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0,PodSandboxId:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694729402795298937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations
:map[string]string{io.kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4515c9d9-09e7-4cd7-8a14-ec8d3d7e870f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.196382697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b064a6e7-e40b-469c-acbe-2f20302630d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.196471450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b064a6e7-e40b-469c-acbe-2f20302630d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.196716330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98c2c492d335fc0cfd6c207d61f0e24228f7e911ee0d922cdff0fbd73967d560,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694729440080950772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3182cd6404df57b460d7bd993044b348a732125988484cd184ad6f87f57251b,PodSandboxId:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694729427120849448,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652,PodSandboxId:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694729424238398698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48,PodSandboxId:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694729415218916225,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694729408973401057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d,PodSandboxId:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694729408856384927,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d,PodSandboxId:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694729403660476980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations
:map[string]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba,PodSandboxId:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694729403175246189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,},Annotations:map[string]
string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2,PodSandboxId:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694729403005587846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,}
,Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0,PodSandboxId:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694729402795298937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations
:map[string]string{io.kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b064a6e7-e40b-469c-acbe-2f20302630d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.229933711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bcc811c2-2591-4c6d-86de-e917ad7f4731 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.230014840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bcc811c2-2591-4c6d-86de-e917ad7f4731 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.230265816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98c2c492d335fc0cfd6c207d61f0e24228f7e911ee0d922cdff0fbd73967d560,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694729440080950772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3182cd6404df57b460d7bd993044b348a732125988484cd184ad6f87f57251b,PodSandboxId:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694729427120849448,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652,PodSandboxId:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694729424238398698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48,PodSandboxId:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694729415218916225,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694729408973401057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d,PodSandboxId:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694729408856384927,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d,PodSandboxId:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694729403660476980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations
:map[string]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba,PodSandboxId:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694729403175246189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,},Annotations:map[string]
string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2,PodSandboxId:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694729403005587846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,}
,Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0,PodSandboxId:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694729402795298937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations
:map[string]string{io.kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bcc811c2-2591-4c6d-86de-e917ad7f4731 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.262293035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cbd099fd-7209-4589-b711-37ca94d00c07 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.262376867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cbd099fd-7209-4589-b711-37ca94d00c07 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:13:49 multinode-124911 crio[705]: time="2023-09-14 22:13:49.262607458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98c2c492d335fc0cfd6c207d61f0e24228f7e911ee0d922cdff0fbd73967d560,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694729440080950772,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3182cd6404df57b460d7bd993044b348a732125988484cd184ad6f87f57251b,PodSandboxId:134152eff658f381d87b8d373919907e94199386eef178ea6d76653e825181b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694729427120849448,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmkvp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 854464d1-c06e-45fe-a6c7-9c8b82f8b8f7,},Annotations:map[string]string{io.kubernetes.container.hash: 380dc55f,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652,PodSandboxId:711014c2d0ef16af5b2bdf0166945215d1a772da0f38564e6294b947c79e525d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694729424238398698,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ssj9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aadacae8-9f4d-4c24-91c7-78a88d187f73,},Annotations:map[string]string{io.kubernetes.container.hash: a20bef1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48,PodSandboxId:e2dab1c0ab110b1ed92c8ecbb43eb69f78f470a010eeec8cd99a72e14902dc36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1694729415218916225,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-274xj,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6d12f7c0-2ad9-436f-ab5d-528c4823865c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c0518b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f,PodSandboxId:c721110f1086a8a363d4ff3c7c42e74ca9bfedbfd4bf7bbc8602dbf5abf951b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694729408973401057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: aada9d30-e15d-4405-a7e2-e979dd4b8e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cc5e37f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d,PodSandboxId:208653ff848170d07f41c6228051bc069b39f2e9445f93dca14aac0000a85fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694729408856384927,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kd4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: de9e2ee3-364a-447b-9d7f-be85d86838ae,},Annotations:map[string]string{io.kubernetes.container.hash: 26e993e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d,PodSandboxId:780a8897ef9d508acfd69feba123ed863fc95fcbd4fe2aa4d3823137fe398fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694729403660476980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87beacc0664a01f1abb8543be732cb2e,},Annotations
:map[string]string{io.kubernetes.container.hash: a0b81492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba,PodSandboxId:0b2fb8ef99a77ae91515d7515117e6558060bb13bea77a8f30ccc9d2a107ef85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694729403175246189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c19e8d6787ee446a44e05a606bee863,},Annotations:map[string]
string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2,PodSandboxId:7dfbf5ebe3a0cd6deb716bba70f55dd52f037b62edd5d89d9aea94bdc45cef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694729403005587846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0364c35ea02d584f30ca0c3d8a47dfb6,}
,Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0,PodSandboxId:174eb38dd000af544a6339e18153ddade4c0516eb8e236adf1fb5f412b365b70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694729402795298937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-124911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45ad3e9dc71d2c9a455002dbdc235854,},Annotations
:map[string]string{io.kubernetes.container.hash: 7beb6efa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cbd099fd-7209-4589-b711-37ca94d00c07 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	98c2c492d335f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   c721110f1086a
	b3182cd6404df       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   134152eff658f
	816cc7b2db9f7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   711014c2d0ef1
	0e9f8b500eed7       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052    3 minutes ago       Running             kindnet-cni               1                   e2dab1c0ab110
	42eb95cb73086       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   c721110f1086a
	db2e5e8dcb923       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      3 minutes ago       Running             kube-proxy                1                   208653ff84817
	f03037d228259       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   780a8897ef9d5
	aad447b4b13bd       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      3 minutes ago       Running             kube-scheduler            1                   0b2fb8ef99a77
	ca46cbcfda644       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      3 minutes ago       Running             kube-controller-manager   1                   7dfbf5ebe3a0c
	e874a033de52c       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      3 minutes ago       Running             kube-apiserver            1                   174eb38dd000a
	
	* 
	* ==> coredns [816cc7b2db9f787c7265aba76d86c8117eae89d2ca46ab565b976be07395c652] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50176 - 53035 "HINFO IN 4442513080231645564.5483201733234398905. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010944628s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-124911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-124911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=multinode-124911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T21_59_21_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 21:59:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-124911
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:13:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:10:38 +0000   Thu, 14 Sep 2023 21:59:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:10:38 +0000   Thu, 14 Sep 2023 21:59:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:10:38 +0000   Thu, 14 Sep 2023 21:59:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:10:38 +0000   Thu, 14 Sep 2023 22:10:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    multinode-124911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 429a16e07a544f27b6b8d5f36ed8ec0a
	  System UUID:                429a16e0-7a54-4f27-b6b8-d5f36ed8ec0a
	  Boot ID:                    ac9039a2-e281-47dc-a93f-affdc4d8180c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pmkvp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-ssj9q                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-124911                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-274xj                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-124911             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-124911    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-2kd4p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-124911             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-124911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-124911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-124911 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                    node-controller  Node multinode-124911 event: Registered Node multinode-124911 in Controller
	  Normal  NodeReady                14m                    kubelet          Node multinode-124911 status is now: NodeReady
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node multinode-124911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node multinode-124911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node multinode-124911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m29s                  node-controller  Node multinode-124911 event: Registered Node multinode-124911 in Controller
	
	
	Name:               multinode-124911-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-124911-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:12:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-124911-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:13:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:12:04 +0000   Thu, 14 Sep 2023 22:12:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:12:04 +0000   Thu, 14 Sep 2023 22:12:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:12:04 +0000   Thu, 14 Sep 2023 22:12:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:12:04 +0000   Thu, 14 Sep 2023 22:12:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.254
	  Hostname:    multinode-124911-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3aefae14bf79416aa65fd41eb4fa5db6
	  System UUID:                3aefae14-bf79-416a-a65f-d41eb4fa5db6
	  Boot ID:                    bd68551a-620c-42fc-a7e9-e2ffd3e3bb0e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-qxwvm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-mmwd5               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-c4qjg            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   Starting                 103s                 kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)    kubelet     Node multinode-124911-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)    kubelet     Node multinode-124911-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)    kubelet     Node multinode-124911-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                  kubelet     Node multinode-124911-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m44s                kubelet     Node multinode-124911-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m9s (x2 over 3m9s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 105s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)  kubelet     Node multinode-124911-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)  kubelet     Node multinode-124911-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)  kubelet     Node multinode-124911-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                 kubelet     Node multinode-124911-m02 status is now: NodeReady
	
	
	Name:               multinode-124911-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-124911-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:13:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-124911-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:13:44 +0000   Thu, 14 Sep 2023 22:13:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:13:44 +0000   Thu, 14 Sep 2023 22:13:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:13:44 +0000   Thu, 14 Sep 2023 22:13:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:13:44 +0000   Thu, 14 Sep 2023 22:13:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    multinode-124911-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 664cc480404e4b4dac8a0d77491863f0
	  System UUID:                664cc480-404e-4b4d-ac8a-0d77491863f0
	  Boot ID:                    4cc52305-bad9-4514-8c0c-9531149727ac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-c9cz8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-vjv8m               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-5tcff            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-124911-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-124911-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-124911-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-124911-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-124911-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-124911-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-124911-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-124911-m03 status is now: NodeReady
	  Normal   NodeNotReady             66s                kubelet     Node multinode-124911-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        37s (x2 over 97s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-124911-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-124911-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-124911-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-124911-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Sep14 22:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065881] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.220371] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.681279] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.121022] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.454339] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.081195] systemd-fstab-generator[632]: Ignoring "noauto" for root device
	[  +0.112081] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.153807] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.114350] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.201703] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[Sep14 22:10] systemd-fstab-generator[904]: Ignoring "noauto" for root device
	[ +14.689448] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [f03037d228259d8cc49c35183b5b5a93ddf0238f604188a15d4805d8957e831d] <==
	* {"level":"info","ts":"2023-09-14T22:10:05.071072Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:10:05.07108Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:10:05.071264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb switched to configuration voters=(10028790062790684635)"}
	{"level":"info","ts":"2023-09-14T22:10:05.071304Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d52e949b9fea4da5","local-member-id":"8b2d6b6d639b2fdb","added-peer-id":"8b2d6b6d639b2fdb","added-peer-peer-urls":["https://192.168.39.116:2380"]}
	{"level":"info","ts":"2023-09-14T22:10:05.071368Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d52e949b9fea4da5","local-member-id":"8b2d6b6d639b2fdb","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:10:05.071392Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:10:05.073541Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T22:10:05.073703Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8b2d6b6d639b2fdb","initial-advertise-peer-urls":["https://192.168.39.116:2380"],"listen-peer-urls":["https://192.168.39.116:2380"],"advertise-client-urls":["https://192.168.39.116:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.116:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T22:10:05.073773Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T22:10:05.07386Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2023-09-14T22:10:05.073889Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.116:2380"}
	{"level":"info","ts":"2023-09-14T22:10:06.245778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T22:10:06.24584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:10:06.245857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb received MsgPreVoteResp from 8b2d6b6d639b2fdb at term 2"}
	{"level":"info","ts":"2023-09-14T22:10:06.245869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:10:06.245874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb received MsgVoteResp from 8b2d6b6d639b2fdb at term 3"}
	{"level":"info","ts":"2023-09-14T22:10:06.245882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b2d6b6d639b2fdb became leader at term 3"}
	{"level":"info","ts":"2023-09-14T22:10:06.245889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8b2d6b6d639b2fdb elected leader 8b2d6b6d639b2fdb at term 3"}
	{"level":"info","ts":"2023-09-14T22:10:06.250159Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:10:06.25019Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8b2d6b6d639b2fdb","local-member-attributes":"{Name:multinode-124911 ClientURLs:[https://192.168.39.116:2379]}","request-path":"/0/members/8b2d6b6d639b2fdb/attributes","cluster-id":"d52e949b9fea4da5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:10:06.250503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:10:06.25202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.116:2379"}
	{"level":"info","ts":"2023-09-14T22:10:06.252157Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:10:06.252385Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:10:06.25242Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  22:13:49 up 4 min,  0 users,  load average: 0.04, 0.16, 0.08
	Linux multinode-124911 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [0e9f8b500eed7ea5ebe97293d8c6aff560b651aeb7790f558769d9d01be72c48] <==
	* I0914 22:13:16.525398       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:13:16.525493       1 main.go:227] handling current node
	I0914 22:13:16.525516       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0914 22:13:16.525533       1 main.go:250] Node multinode-124911-m02 has CIDR [10.244.1.0/24] 
	I0914 22:13:16.525647       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0914 22:13:16.525670       1 main.go:250] Node multinode-124911-m03 has CIDR [10.244.3.0/24] 
	I0914 22:13:26.538835       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:13:26.538873       1 main.go:227] handling current node
	I0914 22:13:26.538884       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0914 22:13:26.538890       1 main.go:250] Node multinode-124911-m02 has CIDR [10.244.1.0/24] 
	I0914 22:13:26.539127       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0914 22:13:26.539160       1 main.go:250] Node multinode-124911-m03 has CIDR [10.244.3.0/24] 
	I0914 22:13:36.552071       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:13:36.552114       1 main.go:227] handling current node
	I0914 22:13:36.552125       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0914 22:13:36.552131       1 main.go:250] Node multinode-124911-m02 has CIDR [10.244.1.0/24] 
	I0914 22:13:36.552236       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0914 22:13:36.552241       1 main.go:250] Node multinode-124911-m03 has CIDR [10.244.3.0/24] 
	I0914 22:13:46.557159       1 main.go:223] Handling node with IPs: map[192.168.39.116:{}]
	I0914 22:13:46.557217       1 main.go:227] handling current node
	I0914 22:13:46.557228       1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
	I0914 22:13:46.557234       1 main.go:250] Node multinode-124911-m02 has CIDR [10.244.1.0/24] 
	I0914 22:13:46.557428       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0914 22:13:46.557457       1 main.go:250] Node multinode-124911-m03 has CIDR [10.244.2.0/24] 
	I0914 22:13:46.557526       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.207 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [e874a033de52c19a7a576739d75fe900488fa3db7f2c7dec22f76377f66775a0] <==
	* I0914 22:10:07.567351       1 handler_discovery.go:404] Starting ResourceDiscoveryManager
	I0914 22:10:07.567444       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 22:10:07.567578       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 22:10:07.704712       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 22:10:07.708498       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 22:10:07.740553       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 22:10:07.740932       1 aggregator.go:166] initial CRD sync complete...
	I0914 22:10:07.740989       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 22:10:07.741020       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 22:10:07.741050       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:10:07.745498       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 22:10:07.746437       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 22:10:07.746474       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 22:10:07.747867       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 22:10:07.748580       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 22:10:07.750579       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0914 22:10:07.763588       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0914 22:10:08.540053       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 22:10:10.146707       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 22:10:10.287463       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 22:10:10.297918       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 22:10:10.366348       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:10:10.373162       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 22:10:20.520943       1 controller.go:624] quota admission added evaluator for: endpoints
	I0914 22:10:20.589633       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [ca46cbcfda644ba995fae700bb3eba9cf95ee6e3b91792d843c988a8a23b6ed2] <==
	* I0914 22:12:04.408146       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-124911-m03"
	I0914 22:12:04.411061       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-lv55w" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-lv55w"
	I0914 22:12:04.424052       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-124911-m02" podCIDRs=["10.244.1.0/24"]
	I0914 22:12:04.561061       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-124911-m02"
	I0914 22:12:04.661069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.733366ms"
	I0914 22:12:04.661461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.8µs"
	I0914 22:12:05.331123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="197.466µs"
	I0914 22:12:18.589934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="197.34µs"
	I0914 22:12:19.156467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="92.755µs"
	I0914 22:12:19.163241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.979µs"
	I0914 22:12:43.136992       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-124911-m02"
	I0914 22:13:40.800455       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-qxwvm"
	I0914 22:13:40.819380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="45.841399ms"
	I0914 22:13:40.837872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.33438ms"
	I0914 22:13:40.853042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.099715ms"
	I0914 22:13:40.853230       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.027µs"
	I0914 22:13:42.431707       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.87098ms"
	I0914 22:13:42.431858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.311µs"
	I0914 22:13:43.806443       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-124911-m02"
	I0914 22:13:44.433000       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-124911-m03\" does not exist"
	I0914 22:13:44.433394       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-124911-m02"
	I0914 22:13:44.435551       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-c9cz8" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-c9cz8"
	I0914 22:13:44.458376       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-124911-m03" podCIDRs=["10.244.2.0/24"]
	I0914 22:13:44.779323       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-124911-m03"
	I0914 22:13:45.334234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.492µs"
	
	* 
	* ==> kube-proxy [db2e5e8dcb9235af730a1ece0f7a701918bc7f4af912d02df99ac721d0a4903d] <==
	* I0914 22:10:09.393809       1 server_others.go:69] "Using iptables proxy"
	I0914 22:10:09.405194       1 node.go:141] Successfully retrieved node IP: 192.168.39.116
	I0914 22:10:09.487988       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:10:09.488029       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:10:09.490296       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:10:09.490324       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:10:09.490448       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:10:09.490460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:10:09.496028       1 config.go:188] "Starting service config controller"
	I0914 22:10:09.496047       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:10:09.496064       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:10:09.496067       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:10:09.496502       1 config.go:315] "Starting node config controller"
	I0914 22:10:09.496508       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:10:09.598921       1 shared_informer.go:318] Caches are synced for node config
	I0914 22:10:09.599074       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:10:09.599164       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [aad447b4b13bd58dbab45c885ea935570c2e6dfc6d56af5ff71f250fed141fba] <==
	* I0914 22:10:05.379488       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:10:07.662016       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:10:07.662125       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:10:07.662138       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:10:07.662145       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:10:07.714773       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:10:07.714852       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:10:07.716871       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:10:07.716974       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:10:07.717283       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:10:07.717501       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:10:07.818505       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:09:35 UTC, ends at Thu 2023-09-14 22:13:49 UTC. --
	Sep 14 22:10:11 multinode-124911 kubelet[910]: E0914 22:10:11.887788     910 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 14 22:10:11 multinode-124911 kubelet[910]: E0914 22:10:11.893958     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-ssj9q" podUID="aadacae8-9f4d-4c24-91c7-78a88d187f73"
	Sep 14 22:10:11 multinode-124911 kubelet[910]: E0914 22:10:11.894287     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-pmkvp" podUID="854464d1-c06e-45fe-a6c7-9c8b82f8b8f7"
	Sep 14 22:10:13 multinode-124911 kubelet[910]: E0914 22:10:13.892035     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-ssj9q" podUID="aadacae8-9f4d-4c24-91c7-78a88d187f73"
	Sep 14 22:10:13 multinode-124911 kubelet[910]: E0914 22:10:13.892585     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-pmkvp" podUID="854464d1-c06e-45fe-a6c7-9c8b82f8b8f7"
	Sep 14 22:10:15 multinode-124911 kubelet[910]: E0914 22:10:15.462241     910 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 22:10:15 multinode-124911 kubelet[910]: E0914 22:10:15.462383     910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aadacae8-9f4d-4c24-91c7-78a88d187f73-config-volume podName:aadacae8-9f4d-4c24-91c7-78a88d187f73 nodeName:}" failed. No retries permitted until 2023-09-14 22:10:23.462356264 +0000 UTC m=+21.840738094 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aadacae8-9f4d-4c24-91c7-78a88d187f73-config-volume") pod "coredns-5dd5756b68-ssj9q" (UID: "aadacae8-9f4d-4c24-91c7-78a88d187f73") : object "kube-system"/"coredns" not registered
	Sep 14 22:10:15 multinode-124911 kubelet[910]: E0914 22:10:15.562929     910 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 14 22:10:15 multinode-124911 kubelet[910]: E0914 22:10:15.562971     910 projected.go:198] Error preparing data for projected volume kube-api-access-7v8hg for pod default/busybox-5bc68d56bd-pmkvp: object "default"/"kube-root-ca.crt" not registered
	Sep 14 22:10:15 multinode-124911 kubelet[910]: E0914 22:10:15.563017     910 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/854464d1-c06e-45fe-a6c7-9c8b82f8b8f7-kube-api-access-7v8hg podName:854464d1-c06e-45fe-a6c7-9c8b82f8b8f7 nodeName:}" failed. No retries permitted until 2023-09-14 22:10:23.563004741 +0000 UTC m=+21.941386562 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7v8hg" (UniqueName: "kubernetes.io/projected/854464d1-c06e-45fe-a6c7-9c8b82f8b8f7-kube-api-access-7v8hg") pod "busybox-5bc68d56bd-pmkvp" (UID: "854464d1-c06e-45fe-a6c7-9c8b82f8b8f7") : object "default"/"kube-root-ca.crt" not registered
	Sep 14 22:10:15 multinode-124911 kubelet[910]: E0914 22:10:15.892913     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-ssj9q" podUID="aadacae8-9f4d-4c24-91c7-78a88d187f73"
	Sep 14 22:10:15 multinode-124911 kubelet[910]: E0914 22:10:15.893328     910 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-pmkvp" podUID="854464d1-c06e-45fe-a6c7-9c8b82f8b8f7"
	Sep 14 22:10:40 multinode-124911 kubelet[910]: I0914 22:10:40.056204     910 scope.go:117] "RemoveContainer" containerID="42eb95cb73086989cb0e663d859f4be9aeff063edb818494536a6f2e54af981f"
	Sep 14 22:11:01 multinode-124911 kubelet[910]: E0914 22:11:01.907910     910 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:11:01 multinode-124911 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:11:01 multinode-124911 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:11:01 multinode-124911 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:12:01 multinode-124911 kubelet[910]: E0914 22:12:01.908624     910 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:12:01 multinode-124911 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:12:01 multinode-124911 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:12:01 multinode-124911 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:13:01 multinode-124911 kubelet[910]: E0914 22:13:01.910466     910 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:13:01 multinode-124911 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:13:01 multinode-124911 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:13:01 multinode-124911 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-124911 -n multinode-124911
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-124911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (685.85s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 stop
E0914 22:14:29.765183   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-124911 stop: exit status 82 (2m1.082103198s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-124911"  ...
	* Stopping node "multinode-124911"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-124911 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-124911 status: exit status 3 (18.673354726s)

                                                
                                                
-- stdout --
	multinode-124911
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-124911-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:16:12.155802   31500 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.116:22: connect: no route to host
	E0914 22:16:12.155839   31500 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.116:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-124911 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-124911 -n multinode-124911
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-124911 -n multinode-124911: exit status 3 (3.170628476s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:16:15.483822   31602 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.116:22: connect: no route to host
	E0914 22:16:15.483846   31602 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-124911" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.93s)

                                                
                                    
x
+
TestPreload (299.68s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-284193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0914 22:24:29.764369   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:26:35.238058   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 22:26:36.474748   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-284193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m35.750728076s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-284193 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-284193 image pull gcr.io/k8s-minikube/busybox: (2.722714355s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-284193
E0914 22:28:32.188347   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-284193: exit status 82 (2m1.628077129s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-284193"  ...
	* Stopping node "test-preload-284193"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-284193 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-09-14 22:29:07.506149922 +0000 UTC m=+3169.700491549
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-284193 -n test-preload-284193
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-284193 -n test-preload-284193: exit status 3 (18.508147331s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:29:26.011771   35176 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.25:22: connect: no route to host
	E0914 22:29:26.011795   35176 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.25:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-284193" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-284193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-284193
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-284193: (1.07035269s)
--- FAIL: TestPreload (299.68s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (171.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1960495081.exe start -p running-upgrade-995756 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0914 22:31:36.476491   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1960495081.exe start -p running-upgrade-995756 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m14.370166654s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-995756 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-995756 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (33.012083868s)

                                                
                                                
-- stdout --
	* [running-upgrade-995756] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-995756 in cluster running-upgrade-995756
	* Updating the running kvm2 "running-upgrade-995756" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:33:39.358262   37907 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:33:39.358367   37907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:33:39.358373   37907 out.go:309] Setting ErrFile to fd 2...
	I0914 22:33:39.358380   37907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:33:39.358569   37907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:33:39.359116   37907 out.go:303] Setting JSON to false
	I0914 22:33:39.360084   37907 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4562,"bootTime":1694726258,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:33:39.360141   37907 start.go:138] virtualization: kvm guest
	I0914 22:33:39.362285   37907 out.go:177] * [running-upgrade-995756] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:33:39.363681   37907 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:33:39.363742   37907 notify.go:220] Checking for updates...
	I0914 22:33:39.365332   37907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:33:39.367268   37907 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:33:39.368733   37907 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:33:39.370191   37907 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:33:39.371648   37907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:33:39.373225   37907 config.go:182] Loaded profile config "running-upgrade-995756": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0914 22:33:39.373239   37907 start_flags.go:686] config upgrade: Driver=kvm2
	I0914 22:33:39.373250   37907 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503
	I0914 22:33:39.373328   37907 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/running-upgrade-995756/config.json ...
	I0914 22:33:39.373884   37907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:39.373927   37907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:39.389781   37907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I0914 22:33:39.390149   37907 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:39.390667   37907 main.go:141] libmachine: Using API Version  1
	I0914 22:33:39.390691   37907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:39.391019   37907 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:39.391237   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:33:39.393224   37907 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 22:33:39.394534   37907 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:33:39.394794   37907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:39.394828   37907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:39.408894   37907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45361
	I0914 22:33:39.409239   37907 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:39.409611   37907 main.go:141] libmachine: Using API Version  1
	I0914 22:33:39.409635   37907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:39.409948   37907 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:39.410117   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:33:39.443519   37907 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:33:39.444870   37907 start.go:298] selected driver: kvm2
	I0914 22:33:39.444892   37907 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-995756 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.117 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 22:33:39.445003   37907 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:33:39.445623   37907 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.445709   37907 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:33:39.459398   37907 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:33:39.459760   37907 cni.go:84] Creating CNI manager for ""
	I0914 22:33:39.459782   37907 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0914 22:33:39.459802   37907 start_flags.go:321] config:
	{Name:running-upgrade-995756 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.117 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 22:33:39.459958   37907 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.461606   37907 out.go:177] * Starting control plane node running-upgrade-995756 in cluster running-upgrade-995756
	I0914 22:33:39.462940   37907 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0914 22:33:39.867227   37907 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0914 22:33:39.867395   37907 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/running-upgrade-995756/config.json ...
	I0914 22:33:39.867561   37907 cache.go:107] acquiring lock: {Name:mkff58d72010a5253f2aeec8a75178e46da26ceb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.867564   37907 cache.go:107] acquiring lock: {Name:mka6f0542a3a53240d4e6146669b2ad365734286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.867601   37907 cache.go:107] acquiring lock: {Name:mk462c0360be954394d7742924c19fe7c63b7d00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.867652   37907 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 22:33:39.867676   37907 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.921µs
	I0914 22:33:39.867674   37907 cache.go:107] acquiring lock: {Name:mk7f016dc56396fc5cc2f1923f09058d0d2f3809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.867679   37907 cache.go:107] acquiring lock: {Name:mk0047fd90520620e6fb8bf8a3cb9d27794b0683 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.867715   37907 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0914 22:33:39.867742   37907 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0914 22:33:39.867730   37907 cache.go:107] acquiring lock: {Name:mka08b933c610fdad9569d9776f61326fe3da113 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.867765   37907 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0914 22:33:39.867700   37907 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 22:33:39.867647   37907 cache.go:107] acquiring lock: {Name:mk493db77490a7ce3badfede780ef64553499771 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.867830   37907 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0914 22:33:39.867796   37907 cache.go:107] acquiring lock: {Name:mka1aa152ae6383e50a98552651ec8f0af2d5a8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:39.867891   37907 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0914 22:33:39.867917   37907 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0914 22:33:39.868007   37907 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0914 22:33:39.868830   37907 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0914 22:33:39.868838   37907 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0914 22:33:39.868830   37907 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0914 22:33:39.868887   37907 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0914 22:33:39.868916   37907 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0914 22:33:39.868923   37907 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0914 22:33:39.868979   37907 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0914 22:33:39.909270   37907 start.go:365] acquiring machines lock for running-upgrade-995756: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:33:40.090236   37907 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0914 22:33:40.118772   37907 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0914 22:33:40.146733   37907 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0914 22:33:40.160932   37907 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0914 22:33:40.162459   37907 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0914 22:33:40.172362   37907 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0914 22:33:40.217492   37907 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0914 22:33:40.255762   37907 cache.go:157] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0914 22:33:40.255785   37907 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 388.158581ms
	I0914 22:33:40.255796   37907 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0914 22:33:40.823121   37907 cache.go:157] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0914 22:33:40.823146   37907 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 955.414808ms
	I0914 22:33:40.823157   37907 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0914 22:33:41.121486   37907 cache.go:157] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0914 22:33:41.121517   37907 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.253862195s
	I0914 22:33:41.121532   37907 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0914 22:33:41.433225   37907 cache.go:157] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0914 22:33:41.433262   37907 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.56570691s
	I0914 22:33:41.433278   37907 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0914 22:33:41.591166   37907 cache.go:157] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0914 22:33:41.591197   37907 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.723605904s
	I0914 22:33:41.591209   37907 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0914 22:33:42.237402   37907 cache.go:157] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0914 22:33:42.237423   37907 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.36975062s
	I0914 22:33:42.237434   37907 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0914 22:33:42.244774   37907 cache.go:157] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0914 22:33:42.244799   37907 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.377093184s
	I0914 22:33:42.244813   37907 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0914 22:33:42.244834   37907 cache.go:87] Successfully saved all images to host disk.
	I0914 22:34:09.008758   37907 start.go:369] acquired machines lock for "running-upgrade-995756" in 29.099448853s
	I0914 22:34:09.008802   37907 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:34:09.008811   37907 fix.go:54] fixHost starting: minikube
	I0914 22:34:09.009153   37907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:34:09.009193   37907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:34:09.025574   37907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I0914 22:34:09.025971   37907 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:34:09.026421   37907 main.go:141] libmachine: Using API Version  1
	I0914 22:34:09.026445   37907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:34:09.026799   37907 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:34:09.027012   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:34:09.027195   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetState
	I0914 22:34:09.028719   37907 fix.go:102] recreateIfNeeded on running-upgrade-995756: state=Running err=<nil>
	W0914 22:34:09.028741   37907 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:34:09.030527   37907 out.go:177] * Updating the running kvm2 "running-upgrade-995756" VM ...
	I0914 22:34:09.032026   37907 machine.go:88] provisioning docker machine ...
	I0914 22:34:09.032053   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:34:09.032278   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetMachineName
	I0914 22:34:09.032420   37907 buildroot.go:166] provisioning hostname "running-upgrade-995756"
	I0914 22:34:09.032460   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetMachineName
	I0914 22:34:09.032623   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:09.035400   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.036006   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:09.036041   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.036170   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHPort
	I0914 22:34:09.036351   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:09.036621   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:09.036781   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHUsername
	I0914 22:34:09.036992   37907 main.go:141] libmachine: Using SSH client type: native
	I0914 22:34:09.037504   37907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0914 22:34:09.037530   37907 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-995756 && echo "running-upgrade-995756" | sudo tee /etc/hostname
	I0914 22:34:09.153597   37907 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-995756
	
	I0914 22:34:09.153640   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:09.156099   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.156470   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:09.156503   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.156645   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHPort
	I0914 22:34:09.156847   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:09.157041   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:09.157204   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHUsername
	I0914 22:34:09.157379   37907 main.go:141] libmachine: Using SSH client type: native
	I0914 22:34:09.157852   37907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0914 22:34:09.157886   37907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-995756' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-995756/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-995756' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:34:09.267593   37907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:34:09.267621   37907 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:34:09.267661   37907 buildroot.go:174] setting up certificates
	I0914 22:34:09.267670   37907 provision.go:83] configureAuth start
	I0914 22:34:09.267682   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetMachineName
	I0914 22:34:09.267958   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetIP
	I0914 22:34:09.270777   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.271130   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:09.271158   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.271388   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:09.273502   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.273827   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:09.273858   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.273966   37907 provision.go:138] copyHostCerts
	I0914 22:34:09.274022   37907 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:34:09.274033   37907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:34:09.274094   37907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:34:09.274184   37907 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:34:09.274193   37907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:34:09.274217   37907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:34:09.274272   37907 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:34:09.274280   37907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:34:09.274298   37907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:34:09.274346   37907 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-995756 san=[192.168.50.117 192.168.50.117 localhost 127.0.0.1 minikube running-upgrade-995756]
	I0914 22:34:09.502116   37907 provision.go:172] copyRemoteCerts
	I0914 22:34:09.502165   37907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:34:09.502191   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:09.505031   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.505402   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:09.505441   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.505634   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHPort
	I0914 22:34:09.505819   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:09.505950   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHUsername
	I0914 22:34:09.506119   37907 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/running-upgrade-995756/id_rsa Username:docker}
	I0914 22:34:09.589531   37907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:34:09.603300   37907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 22:34:09.624316   37907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:34:09.642620   37907 provision.go:86] duration metric: configureAuth took 374.936799ms
	I0914 22:34:09.642640   37907 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:34:09.642823   37907 config.go:182] Loaded profile config "running-upgrade-995756": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0914 22:34:09.642939   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:09.645779   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.646213   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:09.646267   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:09.646509   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHPort
	I0914 22:34:09.646719   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:09.646907   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:09.647089   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHUsername
	I0914 22:34:09.647258   37907 main.go:141] libmachine: Using SSH client type: native
	I0914 22:34:09.647667   37907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0914 22:34:09.647687   37907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:34:10.184128   37907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:34:10.184165   37907 machine.go:91] provisioned docker machine in 1.152124502s
	I0914 22:34:10.184179   37907 start.go:300] post-start starting for "running-upgrade-995756" (driver="kvm2")
	I0914 22:34:10.184194   37907 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:34:10.184224   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:34:10.184543   37907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:34:10.184567   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:10.187742   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.188156   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:10.188189   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.188344   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHPort
	I0914 22:34:10.188550   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:10.188746   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHUsername
	I0914 22:34:10.188879   37907 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/running-upgrade-995756/id_rsa Username:docker}
	I0914 22:34:10.278298   37907 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:34:10.282605   37907 info.go:137] Remote host: Buildroot 2019.02.7
	I0914 22:34:10.282625   37907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:34:10.282683   37907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:34:10.282767   37907 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:34:10.282859   37907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:34:10.288296   37907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:34:10.302593   37907 start.go:303] post-start completed in 118.399447ms
	I0914 22:34:10.302613   37907 fix.go:56] fixHost completed within 1.293803332s
	I0914 22:34:10.302632   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:10.305237   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.305664   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:10.305699   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.305870   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHPort
	I0914 22:34:10.306057   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:10.306206   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:10.306385   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHUsername
	I0914 22:34:10.306596   37907 main.go:141] libmachine: Using SSH client type: native
	I0914 22:34:10.307047   37907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0914 22:34:10.307064   37907 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 22:34:10.423838   37907 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694730850.418582785
	
	I0914 22:34:10.423862   37907 fix.go:206] guest clock: 1694730850.418582785
	I0914 22:34:10.423871   37907 fix.go:219] Guest: 2023-09-14 22:34:10.418582785 +0000 UTC Remote: 2023-09-14 22:34:10.302616528 +0000 UTC m=+30.976356593 (delta=115.966257ms)
	I0914 22:34:10.423894   37907 fix.go:190] guest clock delta is within tolerance: 115.966257ms
	I0914 22:34:10.423901   37907 start.go:83] releasing machines lock for "running-upgrade-995756", held for 1.415117632s
	I0914 22:34:10.423929   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:34:10.424218   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetIP
	I0914 22:34:10.427375   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.427812   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:10.427857   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.428028   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:34:10.428515   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:34:10.428665   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .DriverName
	I0914 22:34:10.428729   37907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:34:10.428770   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:10.428889   37907 ssh_runner.go:195] Run: cat /version.json
	I0914 22:34:10.428914   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHHostname
	I0914 22:34:10.431556   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.431710   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.431959   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:10.432013   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.432226   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:f3", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-14 23:31:57 +0000 UTC Type:0 Mac:52:54:00:2b:2e:f3 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-995756 Clientid:01:52:54:00:2b:2e:f3}
	I0914 22:34:10.432244   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHPort
	I0914 22:34:10.432258   37907 main.go:141] libmachine: (running-upgrade-995756) DBG | domain running-upgrade-995756 has defined IP address 192.168.50.117 and MAC address 52:54:00:2b:2e:f3 in network minikube-net
	I0914 22:34:10.432414   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:10.432497   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHPort
	I0914 22:34:10.432603   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHUsername
	I0914 22:34:10.432780   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHKeyPath
	I0914 22:34:10.432784   37907 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/running-upgrade-995756/id_rsa Username:docker}
	I0914 22:34:10.432915   37907 main.go:141] libmachine: (running-upgrade-995756) Calling .GetSSHUsername
	I0914 22:34:10.433091   37907 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/running-upgrade-995756/id_rsa Username:docker}
	W0914 22:34:10.521775   37907 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 22:34:10.521877   37907 ssh_runner.go:195] Run: systemctl --version
	I0914 22:34:10.557479   37907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:34:10.662229   37907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:34:10.669726   37907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:34:10.669780   37907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:34:10.676653   37907 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 22:34:10.676680   37907 start.go:469] detecting cgroup driver to use...
	I0914 22:34:10.676739   37907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:34:10.690632   37907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:34:10.704995   37907 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:34:10.705056   37907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:34:10.716672   37907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:34:10.727526   37907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0914 22:34:10.739802   37907 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0914 22:34:10.739867   37907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:34:10.902980   37907 docker.go:212] disabling docker service ...
	I0914 22:34:10.903044   37907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:34:11.925741   37907 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.022669766s)
	I0914 22:34:11.925813   37907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:34:11.945475   37907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:34:12.105100   37907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:34:12.284489   37907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:34:12.295788   37907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:34:12.312303   37907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0914 22:34:12.312377   37907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:34:12.325345   37907 out.go:177] 
	W0914 22:34:12.326975   37907 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0914 22:34:12.326999   37907 out.go:239] * 
	* 
	W0914 22:34:12.328153   37907 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 22:34:12.329589   37907 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-995756 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-14 22:34:12.346733317 +0000 UTC m=+3474.541074959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-995756 -n running-upgrade-995756
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-995756 -n running-upgrade-995756: exit status 4 (248.957043ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:34:12.564869   38745 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-995756" does not appear in /home/jenkins/minikube-integration/17243-6287/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-995756" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-995756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-995756
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-995756: (1.409818605s)
--- FAIL: TestRunningBinaryUpgrade (171.22s)
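
The failure above comes from the last provisioning step: minikube tried to point CRI-O at the new pause image with sed -i against /etc/crio/crio.conf.d/02-crio.conf, but that drop-in file does not exist on the guest created from the old minikube-v1.6.0 ISO, so the second start exits with RUNTIME_ENABLE (status 90). Below is a minimal, hypothetical Go sketch of a more defensive update; it is illustrative only (it shells out locally via os/exec rather than using minikube's own SSH runner, and the file path and pause image are simply taken from the log above), not the project's actual fix.

	// Hypothetical sketch (not minikube's actual code): make the pause_image
	// rewrite tolerant of a missing CRI-O drop-in config, the condition that
	// produced "sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or
	// directory" above. Runs the guarded script locally via sh -c.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		const pauseImage = "registry.k8s.io/pause:3.1"

		// Create the drop-in directory and file if they are absent, then update
		// or append the pause_image setting instead of assuming the file exists.
		script := fmt.Sprintf(`
	set -e
	sudo mkdir -p "$(dirname %[1]s)"
	sudo touch %[1]s
	if grep -q 'pause_image = ' %[1]s; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "%[2]s"|' %[1]s
	else
	  echo 'pause_image = "%[2]s"' | sudo tee -a %[1]s >/dev/null
	fi`, conf, pauseImage)

		out, err := exec.Command("sh", "-c", script).CombinedOutput()
		if err != nil {
			fmt.Printf("update pause_image failed: %v\n%s", err, out)
			return
		}
		fmt.Println("pause_image updated in", conf)
	}

Run on a CRI-O host, a guard like this would leave a valid 02-crio.conf behind even when the old guest image ships without one; on the actual v1.6.0 VM the same commands would have to go through minikube's SSH runner.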

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (54.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-354420 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-354420 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.360301268s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-354420] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-354420 in cluster pause-354420
	* Updating the running kvm2 "pause-354420" VM ...
	* Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-354420" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:33:13.054283   37436 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:33:13.054389   37436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:33:13.054400   37436 out.go:309] Setting ErrFile to fd 2...
	I0914 22:33:13.054406   37436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:33:13.054646   37436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:33:13.055318   37436 out.go:303] Setting JSON to false
	I0914 22:33:13.056477   37436 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4535,"bootTime":1694726258,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:33:13.056560   37436 start.go:138] virtualization: kvm guest
	I0914 22:33:13.059043   37436 out.go:177] * [pause-354420] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:33:13.060618   37436 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:33:13.060617   37436 notify.go:220] Checking for updates...
	I0914 22:33:13.062198   37436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:33:13.063863   37436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:33:13.065420   37436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:33:13.066784   37436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:33:13.068339   37436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:33:13.070223   37436 config.go:182] Loaded profile config "pause-354420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:33:13.070773   37436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:13.070822   37436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:13.087875   37436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34051
	I0914 22:33:13.088273   37436 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:13.088879   37436 main.go:141] libmachine: Using API Version  1
	I0914 22:33:13.088902   37436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:13.089291   37436 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:13.089459   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:13.089694   37436 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:33:13.089996   37436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:13.090048   37436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:13.106850   37436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46553
	I0914 22:33:13.107275   37436 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:13.107805   37436 main.go:141] libmachine: Using API Version  1
	I0914 22:33:13.107833   37436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:13.108179   37436 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:13.108379   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:13.141478   37436 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:33:13.142964   37436 start.go:298] selected driver: kvm2
	I0914 22:33:13.142979   37436 start.go:902] validating driver "kvm2" against &{Name:pause-354420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-354420 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:33:13.143108   37436 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:33:13.143411   37436 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:13.143505   37436 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:33:13.157974   37436 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:33:13.158683   37436 cni.go:84] Creating CNI manager for ""
	I0914 22:33:13.158705   37436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:33:13.158717   37436 start_flags.go:321] config:
	{Name:pause-354420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-354420 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:33:13.158942   37436 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:13.160596   37436 out.go:177] * Starting control plane node pause-354420 in cluster pause-354420
	I0914 22:33:13.162044   37436 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:33:13.162080   37436 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0914 22:33:13.162093   37436 cache.go:57] Caching tarball of preloaded images
	I0914 22:33:13.162170   37436 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:33:13.162187   37436 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 22:33:13.162346   37436 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/config.json ...
	I0914 22:33:13.162558   37436 start.go:365] acquiring machines lock for pause-354420: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:33:13.162611   37436 start.go:369] acquired machines lock for "pause-354420" in 30.062µs
	I0914 22:33:13.162630   37436 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:33:13.162638   37436 fix.go:54] fixHost starting: 
	I0914 22:33:13.162917   37436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:13.162954   37436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:13.177217   37436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
	I0914 22:33:13.177639   37436 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:13.178046   37436 main.go:141] libmachine: Using API Version  1
	I0914 22:33:13.178067   37436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:13.178365   37436 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:13.178519   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:13.178716   37436 main.go:141] libmachine: (pause-354420) Calling .GetState
	I0914 22:33:13.180441   37436 fix.go:102] recreateIfNeeded on pause-354420: state=Running err=<nil>
	W0914 22:33:13.180469   37436 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:33:13.182120   37436 out.go:177] * Updating the running kvm2 "pause-354420" VM ...
	I0914 22:33:13.183398   37436 machine.go:88] provisioning docker machine ...
	I0914 22:33:13.183423   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:13.183667   37436 main.go:141] libmachine: (pause-354420) Calling .GetMachineName
	I0914 22:33:13.183844   37436 buildroot.go:166] provisioning hostname "pause-354420"
	I0914 22:33:13.183867   37436 main.go:141] libmachine: (pause-354420) Calling .GetMachineName
	I0914 22:33:13.184015   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:13.186860   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.187301   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:13.187331   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.187413   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHPort
	I0914 22:33:13.187645   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:13.187837   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:13.187979   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHUsername
	I0914 22:33:13.188114   37436 main.go:141] libmachine: Using SSH client type: native
	I0914 22:33:13.188559   37436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 22:33:13.188585   37436 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-354420 && echo "pause-354420" | sudo tee /etc/hostname
	I0914 22:33:13.328220   37436 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-354420
	
	I0914 22:33:13.328260   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:13.331307   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.331588   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:13.331619   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.331802   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHPort
	I0914 22:33:13.332122   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:13.332384   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:13.332590   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHUsername
	I0914 22:33:13.332762   37436 main.go:141] libmachine: Using SSH client type: native
	I0914 22:33:13.333160   37436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 22:33:13.333189   37436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-354420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-354420/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-354420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:33:13.453277   37436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:33:13.453305   37436 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:33:13.453329   37436 buildroot.go:174] setting up certificates
	I0914 22:33:13.453340   37436 provision.go:83] configureAuth start
	I0914 22:33:13.453351   37436 main.go:141] libmachine: (pause-354420) Calling .GetMachineName
	I0914 22:33:13.453638   37436 main.go:141] libmachine: (pause-354420) Calling .GetIP
	I0914 22:33:13.456584   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.456970   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:13.457005   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.457600   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:13.464335   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.464628   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:13.464714   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.464862   37436 provision.go:138] copyHostCerts
	I0914 22:33:13.464905   37436 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:33:13.464912   37436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:33:13.464973   37436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:33:13.465086   37436 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:33:13.465092   37436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:33:13.465114   37436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:33:13.465180   37436 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:33:13.465184   37436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:33:13.465202   37436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:33:13.465257   37436 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.pause-354420 san=[192.168.39.45 192.168.39.45 localhost 127.0.0.1 minikube pause-354420]
	I0914 22:33:13.567069   37436 provision.go:172] copyRemoteCerts
	I0914 22:33:13.567129   37436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:33:13.567157   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:13.570695   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.571205   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:13.571255   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:13.571493   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHPort
	I0914 22:33:13.571723   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:13.571924   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHUsername
	I0914 22:33:13.572111   37436 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/pause-354420/id_rsa Username:docker}
	I0914 22:33:13.669471   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:33:13.692668   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:33:13.716727   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 22:33:13.749571   37436 provision.go:86] duration metric: configureAuth took 296.218105ms
	I0914 22:33:13.749602   37436 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:33:13.749819   37436 config.go:182] Loaded profile config "pause-354420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:33:13.749926   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:14.264720   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:14.265152   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:14.265195   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:14.265422   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHPort
	I0914 22:33:14.265618   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:14.265794   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:14.265924   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHUsername
	I0914 22:33:14.266079   37436 main.go:141] libmachine: Using SSH client type: native
	I0914 22:33:14.266521   37436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 22:33:14.266548   37436 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:33:19.873811   37436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:33:19.873840   37436 machine.go:91] provisioned docker machine in 6.690424965s
	I0914 22:33:19.873853   37436 start.go:300] post-start starting for "pause-354420" (driver="kvm2")
	I0914 22:33:19.873897   37436 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:33:19.873929   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:19.874281   37436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:33:19.874303   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:19.877440   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:19.877870   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:19.877902   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:19.878021   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHPort
	I0914 22:33:19.878225   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:19.878365   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHUsername
	I0914 22:33:19.878513   37436 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/pause-354420/id_rsa Username:docker}
	I0914 22:33:19.961587   37436 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:33:19.965577   37436 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:33:19.965602   37436 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:33:19.965680   37436 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:33:19.965750   37436 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:33:19.965855   37436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:33:19.975061   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:33:19.996466   37436 start.go:303] post-start completed in 122.602127ms
	I0914 22:33:19.996484   37436 fix.go:56] fixHost completed within 6.833845998s
	I0914 22:33:19.996508   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:19.999719   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:20.000221   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:20.000257   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:20.000418   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHPort
	I0914 22:33:20.000638   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:20.000880   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:20.001041   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHUsername
	I0914 22:33:20.001214   37436 main.go:141] libmachine: Using SSH client type: native
	I0914 22:33:20.001575   37436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0914 22:33:20.001590   37436 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 22:33:20.111906   37436 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694730800.107166862
	
	I0914 22:33:20.111927   37436 fix.go:206] guest clock: 1694730800.107166862
	I0914 22:33:20.111934   37436 fix.go:219] Guest: 2023-09-14 22:33:20.107166862 +0000 UTC Remote: 2023-09-14 22:33:19.996488675 +0000 UTC m=+6.976904887 (delta=110.678187ms)
	I0914 22:33:20.111968   37436 fix.go:190] guest clock delta is within tolerance: 110.678187ms
	I0914 22:33:20.111974   37436 start.go:83] releasing machines lock for "pause-354420", held for 6.949352676s
	I0914 22:33:20.112000   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:20.112239   37436 main.go:141] libmachine: (pause-354420) Calling .GetIP
	I0914 22:33:20.115186   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:20.115577   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:20.115611   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:20.115782   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:20.116482   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:20.116709   37436 main.go:141] libmachine: (pause-354420) Calling .DriverName
	I0914 22:33:20.116799   37436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:33:20.116842   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:20.116936   37436 ssh_runner.go:195] Run: cat /version.json
	I0914 22:33:20.116965   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHHostname
	I0914 22:33:20.119731   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:20.119918   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:20.120135   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:20.120164   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:20.120320   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:20.120342   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:20.120346   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHPort
	I0914 22:33:20.120498   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHPort
	I0914 22:33:20.120574   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:20.120675   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHKeyPath
	I0914 22:33:20.120762   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHUsername
	I0914 22:33:20.120785   37436 main.go:141] libmachine: (pause-354420) Calling .GetSSHUsername
	I0914 22:33:20.120906   37436 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/pause-354420/id_rsa Username:docker}
	I0914 22:33:20.120924   37436 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/pause-354420/id_rsa Username:docker}
	I0914 22:33:20.233773   37436 ssh_runner.go:195] Run: systemctl --version
	I0914 22:33:20.239443   37436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:33:20.394347   37436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:33:20.399806   37436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:33:20.399883   37436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:33:20.407276   37436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 22:33:20.407303   37436 start.go:469] detecting cgroup driver to use...
	I0914 22:33:20.407376   37436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:33:20.421060   37436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:33:20.433101   37436 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:33:20.433148   37436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:33:20.445006   37436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:33:20.456321   37436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:33:20.686673   37436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:33:21.203191   37436 docker.go:212] disabling docker service ...
	I0914 22:33:21.203250   37436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:33:21.237385   37436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:33:21.266530   37436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:33:21.500522   37436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:33:21.770517   37436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:33:21.804003   37436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:33:21.845444   37436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:33:21.845517   37436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:33:21.866731   37436 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:33:21.866783   37436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:33:21.887087   37436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:33:21.908555   37436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:33:21.930766   37436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:33:21.952161   37436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:33:21.970292   37436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:33:21.985289   37436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:33:22.306632   37436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:33:23.629665   37436 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.322997221s)
	I0914 22:33:23.629698   37436 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:33:23.629752   37436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:33:23.642693   37436 start.go:537] Will wait 60s for crictl version
	I0914 22:33:23.642769   37436 ssh_runner.go:195] Run: which crictl
	I0914 22:33:23.681058   37436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:33:23.995648   37436 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:33:23.995799   37436 ssh_runner.go:195] Run: crio --version
	I0914 22:33:24.102448   37436 ssh_runner.go:195] Run: crio --version
	I0914 22:33:24.200094   37436 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:33:24.201493   37436 main.go:141] libmachine: (pause-354420) Calling .GetIP
	I0914 22:33:24.204797   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:24.205152   37436 main.go:141] libmachine: (pause-354420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:97:0d", ip: ""} in network mk-pause-354420: {Iface:virbr1 ExpiryTime:2023-09-14 23:31:38 +0000 UTC Type:0 Mac:52:54:00:da:97:0d Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-354420 Clientid:01:52:54:00:da:97:0d}
	I0914 22:33:24.205183   37436 main.go:141] libmachine: (pause-354420) DBG | domain pause-354420 has defined IP address 192.168.39.45 and MAC address 52:54:00:da:97:0d in network mk-pause-354420
	I0914 22:33:24.205375   37436 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:33:24.214882   37436 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:33:24.214937   37436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:33:24.288231   37436 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:33:24.288258   37436 crio.go:415] Images already preloaded, skipping extraction
	I0914 22:33:24.288323   37436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:33:24.341332   37436 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:33:24.341357   37436 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:33:24.341447   37436 ssh_runner.go:195] Run: crio config
	I0914 22:33:24.470898   37436 cni.go:84] Creating CNI manager for ""
	I0914 22:33:24.470925   37436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:33:24.470949   37436 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:33:24.470976   37436 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-354420 NodeName:pause-354420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:33:24.471163   37436 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-354420"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:33:24.471254   37436 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-354420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-354420 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:33:24.471330   37436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:33:24.488735   37436 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:33:24.488834   37436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:33:24.505241   37436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0914 22:33:24.525543   37436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:33:24.546001   37436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0914 22:33:24.567497   37436 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I0914 22:33:24.573980   37436 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420 for IP: 192.168.39.45
	I0914 22:33:24.574013   37436 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:33:24.574192   37436 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:33:24.574235   37436 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:33:24.574301   37436 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/client.key
	I0914 22:33:24.574372   37436 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/apiserver.key.7aba1c1f
	I0914 22:33:24.574411   37436 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/proxy-client.key
	I0914 22:33:24.574516   37436 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:33:24.574543   37436 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:33:24.574554   37436 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:33:24.574577   37436 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:33:24.574598   37436 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:33:24.574619   37436 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:33:24.574656   37436 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:33:24.575238   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:33:24.612118   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:33:24.649064   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:33:24.691175   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 22:33:24.731295   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:33:24.774265   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:33:24.819621   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:33:24.874731   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:33:24.956231   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:33:24.999524   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:33:25.045119   37436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:33:25.092487   37436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:33:25.122814   37436 ssh_runner.go:195] Run: openssl version
	I0914 22:33:25.134156   37436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:33:25.158352   37436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:33:25.168583   37436 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:33:25.168650   37436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:33:25.175654   37436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:33:25.189065   37436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:33:25.214145   37436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:33:25.228368   37436 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:33:25.228437   37436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:33:25.249805   37436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:33:25.278636   37436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:33:25.314020   37436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:33:25.323815   37436 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:33:25.323882   37436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:33:25.331785   37436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:33:25.357691   37436 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:33:25.365538   37436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:33:25.372712   37436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:33:25.383271   37436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:33:25.400440   37436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:33:25.409295   37436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:33:25.424633   37436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:33:25.433321   37436 kubeadm.go:404] StartCluster: {Name:pause-354420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-354420 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:33:25.433426   37436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:33:25.433495   37436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:33:25.486876   37436 cri.go:89] found id: "c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071"
	I0914 22:33:25.486911   37436 cri.go:89] found id: "f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994"
	I0914 22:33:25.486921   37436 cri.go:89] found id: "048be7106fa999814eb7a2fb026393114de8e03f362665dcf779928a67d5ae4f"
	I0914 22:33:25.486929   37436 cri.go:89] found id: "bbfe789f49af64f3ca869ac92eb3cd7c2712b87e95c17dc8e50441ec22b412b0"
	I0914 22:33:25.486936   37436 cri.go:89] found id: "1a3d375588d64dc0d11e65e0b9b23aa9546b4341374c16632ef78ecbc6c6b9f5"
	I0914 22:33:25.486944   37436 cri.go:89] found id: "5bfde326bfdc7b5db1720daeb59521f4ebbda892ffb8d647e0d9199399589686"
	I0914 22:33:25.486953   37436 cri.go:89] found id: ""
	I0914 22:33:25.487022   37436 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-354420 -n pause-354420
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-354420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-354420 logs -n 25: (1.179625054s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:30 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:30 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:31 UTC |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-948115         | offline-crio-948115       | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:33 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-354420 --memory=2048  | pause-354420              | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:33 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:33 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-354420                | pause-354420              | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:34 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-948115         | offline-crio-948115       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	| start   | -p kubernetes-upgrade-711912   | kubernetes-upgrade-711912 | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-995756      | running-upgrade-995756    | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-982498 sudo    | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:33:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:33:51.688445   38243 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:33:51.688748   38243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:33:51.688753   38243 out.go:309] Setting ErrFile to fd 2...
	I0914 22:33:51.688758   38243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:33:51.689029   38243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:33:51.689687   38243 out.go:303] Setting JSON to false
	I0914 22:33:51.690937   38243 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4574,"bootTime":1694726258,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:33:51.691007   38243 start.go:138] virtualization: kvm guest
	I0914 22:33:51.693268   38243 out.go:177] * [NoKubernetes-982498] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:33:51.694984   38243 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:33:51.695036   38243 notify.go:220] Checking for updates...
	I0914 22:33:51.696741   38243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:33:51.698423   38243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:33:51.700241   38243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:33:51.701701   38243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:33:51.703192   38243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:33:51.705168   38243 config.go:182] Loaded profile config "NoKubernetes-982498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0914 22:33:51.705584   38243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:51.705638   38243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:51.720694   38243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I0914 22:33:51.721124   38243 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:51.722043   38243 main.go:141] libmachine: Using API Version  1
	I0914 22:33:51.722058   38243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:51.723439   38243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:51.723834   38243 main.go:141] libmachine: (NoKubernetes-982498) Calling .DriverName
	I0914 22:33:51.724058   38243 start.go:1720] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0914 22:33:51.724079   38243 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:33:51.724410   38243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:51.724439   38243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:51.740985   38243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45453
	I0914 22:33:51.741424   38243 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:51.741897   38243 main.go:141] libmachine: Using API Version  1
	I0914 22:33:51.741911   38243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:51.742250   38243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:51.742464   38243 main.go:141] libmachine: (NoKubernetes-982498) Calling .DriverName
	I0914 22:33:51.780240   38243 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:33:51.781734   38243 start.go:298] selected driver: kvm2
	I0914 22:33:51.781739   38243 start.go:902] validating driver "kvm2" against &{Name:NoKubernetes-982498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-982498 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.168 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:33:51.781827   38243 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:33:51.782114   38243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:51.782189   38243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:33:51.797132   38243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:33:51.797858   38243 cni.go:84] Creating CNI manager for ""
	I0914 22:33:51.797871   38243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:33:51.797883   38243 start_flags.go:321] config:
	{Name:NoKubernetes-982498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-982498 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.168 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:33:51.798015   38243 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:51.800027   38243 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-982498
	I0914 22:33:48.150767   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:48.151247   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:48.151278   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:48.151203   37979 retry.go:31] will retry after 719.072833ms: waiting for machine to come up
	I0914 22:33:48.871415   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:48.871936   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:48.871962   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:48.871914   37979 retry.go:31] will retry after 1.25318085s: waiting for machine to come up
	I0914 22:33:50.126396   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:50.126889   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:50.126920   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:50.126850   37979 retry.go:31] will retry after 1.801046185s: waiting for machine to come up
	I0914 22:33:51.929241   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:51.929779   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:51.929808   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:51.929733   37979 retry.go:31] will retry after 2.070618875s: waiting for machine to come up
	I0914 22:33:49.880212   37436 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:33:49.891355   37436 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:33:49.912114   37436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:33:49.928157   37436 system_pods.go:59] 6 kube-system pods found
	I0914 22:33:49.928217   37436 system_pods.go:61] "coredns-5dd5756b68-6q49n" [b0fa85bd-c439-4a5a-9e2a-552faa59e3c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:33:49.928241   37436 system_pods.go:61] "etcd-pause-354420" [1378a8dc-5ec6-405e-ab63-77299387c832] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:33:49.928268   37436 system_pods.go:61] "kube-apiserver-pause-354420" [7bb1b6a6-5647-4ae6-a1f1-e022bdd395e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:33:49.928289   37436 system_pods.go:61] "kube-controller-manager-pause-354420" [78b07d77-93ca-4961-b103-1ea6754fd60d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:33:49.928307   37436 system_pods.go:61] "kube-proxy-fzt4z" [cba0aa8a-8a13-414c-8d84-7de5a8f6b945] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:33:49.928325   37436 system_pods.go:61] "kube-scheduler-pause-354420" [1752fa56-321d-41a0-b1a2-798db32e0b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:33:49.928341   37436 system_pods.go:74] duration metric: took 16.21046ms to wait for pod list to return data ...
	I0914 22:33:49.928358   37436 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:33:49.931929   37436 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:33:49.931962   37436 node_conditions.go:123] node cpu capacity is 2
	I0914 22:33:49.931975   37436 node_conditions.go:105] duration metric: took 3.603904ms to run NodePressure ...
	I0914 22:33:49.931994   37436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:33:50.177529   37436 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:33:50.182885   37436 kubeadm.go:787] kubelet initialised
	I0914 22:33:50.182910   37436 kubeadm.go:788] duration metric: took 5.350492ms waiting for restarted kubelet to initialise ...
	I0914 22:33:50.182920   37436 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:33:50.187974   37436 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:50.714445   37436 pod_ready.go:92] pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace has status "Ready":"True"
	I0914 22:33:50.714473   37436 pod_ready.go:81] duration metric: took 526.47739ms waiting for pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:50.714485   37436 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:52.735854   37436 pod_ready.go:102] pod "etcd-pause-354420" in "kube-system" namespace has status "Ready":"False"
	I0914 22:33:51.801491   38243 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0914 22:33:52.208916   38243 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0914 22:33:52.209044   38243 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/NoKubernetes-982498/config.json ...
	I0914 22:33:52.209321   38243 start.go:365] acquiring machines lock for NoKubernetes-982498: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:33:54.001647   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:54.002264   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:54.002295   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:54.002214   37979 retry.go:31] will retry after 2.543119841s: waiting for machine to come up
	I0914 22:33:56.548671   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:56.549231   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:56.549272   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:56.549198   37979 retry.go:31] will retry after 2.444315254s: waiting for machine to come up
	I0914 22:33:55.236233   37436 pod_ready.go:102] pod "etcd-pause-354420" in "kube-system" namespace has status "Ready":"False"
	I0914 22:33:57.234406   37436 pod_ready.go:92] pod "etcd-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:33:57.234431   37436 pod_ready.go:81] duration metric: took 6.519938592s waiting for pod "etcd-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:57.234440   37436 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:59.252328   37436 pod_ready.go:102] pod "kube-apiserver-pause-354420" in "kube-system" namespace has status "Ready":"False"
	I0914 22:34:00.757125   37436 pod_ready.go:92] pod "kube-apiserver-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:00.757152   37436 pod_ready.go:81] duration metric: took 3.522704525s waiting for pod "kube-apiserver-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.757170   37436 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.762482   37436 pod_ready.go:92] pod "kube-controller-manager-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:00.762505   37436 pod_ready.go:81] duration metric: took 5.327268ms waiting for pod "kube-controller-manager-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.762514   37436 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fzt4z" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.769045   37436 pod_ready.go:92] pod "kube-proxy-fzt4z" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:00.769063   37436 pod_ready.go:81] duration metric: took 6.543741ms waiting for pod "kube-proxy-fzt4z" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.769071   37436 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.774138   37436 pod_ready.go:92] pod "kube-scheduler-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:00.774162   37436 pod_ready.go:81] duration metric: took 5.08424ms waiting for pod "kube-scheduler-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.774173   37436 pod_ready.go:38] duration metric: took 10.591241383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:34:00.774195   37436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:34:00.789845   37436 ops.go:34] apiserver oom_adj: -16
	I0914 22:34:00.789864   37436 kubeadm.go:640] restartCluster took 35.135131308s
	I0914 22:34:00.789873   37436 kubeadm.go:406] StartCluster complete in 35.356557623s
	I0914 22:34:00.789891   37436 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:34:00.789979   37436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:34:00.790681   37436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:34:00.790898   37436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:34:00.791044   37436 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:34:00.791146   37436 config.go:182] Loaded profile config "pause-354420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:34:00.792935   37436 out.go:177] * Enabled addons: 
	I0914 22:34:00.791452   37436 kapi.go:59] client config for pause-354420: &rest.Config{Host:"https://192.168.39.45:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:34:00.794450   37436 addons.go:502] enable addons completed in 3.413911ms: enabled=[]
	I0914 22:34:00.798282   37436 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-354420" context rescaled to 1 replicas
	I0914 22:34:00.798310   37436 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:34:00.799853   37436 out.go:177] * Verifying Kubernetes components...
	I0914 22:33:58.995303   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:58.995778   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:58.995836   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:58.995711   37979 retry.go:31] will retry after 3.712127836s: waiting for machine to come up
	I0914 22:34:02.712396   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:34:02.713003   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:34:02.713034   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:34:02.712949   37979 retry.go:31] will retry after 4.412404699s: waiting for machine to come up
	I0914 22:34:00.801237   37436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:34:00.912310   37436 node_ready.go:35] waiting up to 6m0s for node "pause-354420" to be "Ready" ...
	I0914 22:34:00.912323   37436 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:34:00.915176   37436 node_ready.go:49] node "pause-354420" has status "Ready":"True"
	I0914 22:34:00.915191   37436 node_ready.go:38] duration metric: took 2.849627ms waiting for node "pause-354420" to be "Ready" ...
	I0914 22:34:00.915199   37436 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:34:01.118971   37436 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:01.516913   37436 pod_ready.go:92] pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:01.516940   37436 pod_ready.go:81] duration metric: took 397.941577ms waiting for pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:01.516953   37436 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:01.916702   37436 pod_ready.go:92] pod "etcd-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:01.916727   37436 pod_ready.go:81] duration metric: took 399.767579ms waiting for pod "etcd-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:01.916738   37436 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:02.317161   37436 pod_ready.go:92] pod "kube-apiserver-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:02.317185   37436 pod_ready.go:81] duration metric: took 400.439277ms waiting for pod "kube-apiserver-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:02.317197   37436 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:02.716542   37436 pod_ready.go:92] pod "kube-controller-manager-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:02.716563   37436 pod_ready.go:81] duration metric: took 399.358595ms waiting for pod "kube-controller-manager-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:02.716572   37436 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fzt4z" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:03.115713   37436 pod_ready.go:92] pod "kube-proxy-fzt4z" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:03.115738   37436 pod_ready.go:81] duration metric: took 399.159745ms waiting for pod "kube-proxy-fzt4z" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:03.115747   37436 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:03.517207   37436 pod_ready.go:92] pod "kube-scheduler-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:03.517234   37436 pod_ready.go:81] duration metric: took 401.479188ms waiting for pod "kube-scheduler-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:03.517257   37436 pod_ready.go:38] duration metric: took 2.60204941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:34:03.517277   37436 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:34:03.517336   37436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:34:03.530335   37436 api_server.go:72] duration metric: took 2.732004295s to wait for apiserver process to appear ...
	I0914 22:34:03.530362   37436 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:34:03.530394   37436 api_server.go:253] Checking apiserver healthz at https://192.168.39.45:8443/healthz ...
	I0914 22:34:03.535289   37436 api_server.go:279] https://192.168.39.45:8443/healthz returned 200:
	ok
	I0914 22:34:03.536480   37436 api_server.go:141] control plane version: v1.28.1
	I0914 22:34:03.536506   37436 api_server.go:131] duration metric: took 6.136853ms to wait for apiserver health ...
	I0914 22:34:03.536516   37436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:34:03.721078   37436 system_pods.go:59] 6 kube-system pods found
	I0914 22:34:03.721114   37436 system_pods.go:61] "coredns-5dd5756b68-6q49n" [b0fa85bd-c439-4a5a-9e2a-552faa59e3c0] Running
	I0914 22:34:03.721122   37436 system_pods.go:61] "etcd-pause-354420" [1378a8dc-5ec6-405e-ab63-77299387c832] Running
	I0914 22:34:03.721129   37436 system_pods.go:61] "kube-apiserver-pause-354420" [7bb1b6a6-5647-4ae6-a1f1-e022bdd395e0] Running
	I0914 22:34:03.721136   37436 system_pods.go:61] "kube-controller-manager-pause-354420" [78b07d77-93ca-4961-b103-1ea6754fd60d] Running
	I0914 22:34:03.721143   37436 system_pods.go:61] "kube-proxy-fzt4z" [cba0aa8a-8a13-414c-8d84-7de5a8f6b945] Running
	I0914 22:34:03.721149   37436 system_pods.go:61] "kube-scheduler-pause-354420" [1752fa56-321d-41a0-b1a2-798db32e0b95] Running
	I0914 22:34:03.721156   37436 system_pods.go:74] duration metric: took 184.634034ms to wait for pod list to return data ...
	I0914 22:34:03.721164   37436 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:34:03.916296   37436 default_sa.go:45] found service account: "default"
	I0914 22:34:03.916330   37436 default_sa.go:55] duration metric: took 195.15841ms for default service account to be created ...
	I0914 22:34:03.916341   37436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:34:04.121966   37436 system_pods.go:86] 6 kube-system pods found
	I0914 22:34:04.122000   37436 system_pods.go:89] "coredns-5dd5756b68-6q49n" [b0fa85bd-c439-4a5a-9e2a-552faa59e3c0] Running
	I0914 22:34:04.122009   37436 system_pods.go:89] "etcd-pause-354420" [1378a8dc-5ec6-405e-ab63-77299387c832] Running
	I0914 22:34:04.122016   37436 system_pods.go:89] "kube-apiserver-pause-354420" [7bb1b6a6-5647-4ae6-a1f1-e022bdd395e0] Running
	I0914 22:34:04.122023   37436 system_pods.go:89] "kube-controller-manager-pause-354420" [78b07d77-93ca-4961-b103-1ea6754fd60d] Running
	I0914 22:34:04.122031   37436 system_pods.go:89] "kube-proxy-fzt4z" [cba0aa8a-8a13-414c-8d84-7de5a8f6b945] Running
	I0914 22:34:04.122036   37436 system_pods.go:89] "kube-scheduler-pause-354420" [1752fa56-321d-41a0-b1a2-798db32e0b95] Running
	I0914 22:34:04.122045   37436 system_pods.go:126] duration metric: took 205.698752ms to wait for k8s-apps to be running ...
	I0914 22:34:04.122061   37436 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:34:04.122112   37436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:34:04.136919   37436 system_svc.go:56] duration metric: took 14.85116ms WaitForService to wait for kubelet.
	I0914 22:34:04.136946   37436 kubeadm.go:581] duration metric: took 3.338618818s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:34:04.136969   37436 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:34:04.315762   37436 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:34:04.315786   37436 node_conditions.go:123] node cpu capacity is 2
	I0914 22:34:04.315797   37436 node_conditions.go:105] duration metric: took 178.823037ms to run NodePressure ...
	I0914 22:34:04.315811   37436 start.go:228] waiting for startup goroutines ...
	I0914 22:34:04.315817   37436 start.go:233] waiting for cluster config update ...
	I0914 22:34:04.315823   37436 start.go:242] writing updated cluster config ...
	I0914 22:34:04.316125   37436 ssh_runner.go:195] Run: rm -f paused
	I0914 22:34:04.362656   37436 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:34:04.364723   37436 out.go:177] * Done! kubectl is now configured to use "pause-354420" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:31:34 UTC, ends at Thu 2023-09-14 22:34:05 UTC. --
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.845520026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b9aec1e0-e521-4a6c-ad24-7267851505cf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.845803684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b9aec1e0-e521-4a6c-ad24-7267851505cf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.861386653Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=981cb9dc-1a78-4c98-b38a-0adaaed27f7c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.862641741Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-6q49n,Uid:b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803885719605,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:32:30.793568291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-354420,Uid:297350e03cbdb80c3730eb7ffa543bdc,Namespace:kube-system,
Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803855433045,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 297350e03cbdb80c3730eb7ffa543bdc,kubernetes.io/config.seen: 2023-09-14T22:32:15.198541757Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-354420,Uid:d01539766913262579c92cafe9e2828a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803829877985,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d015397669132
62579c92cafe9e2828a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d01539766913262579c92cafe9e2828a,kubernetes.io/config.seen: 2023-09-14T22:32:15.198540739Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-354420,Uid:ed4793bb373a24bab68281d3d96396f0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803817564369,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96396f0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.45:8443,kubernetes.io/config.hash: ed4793bb373a24bab68281d3d96396f0,kubernetes.io/config.seen: 2023-09-14T22:32:15.198539413Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&PodSandboxMetadata{Name:kube-proxy-fzt4z,Uid:cba0aa8a-8a13-414c-8d84-7de5a8f6b945,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803774415968,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:32:30.535986605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&PodSandboxMetadata{Name:etcd-pause-354420,Uid:9dead96b6951fd34fa6c7770070f6de9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803707586120,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.45:2379,kubernetes.io/config.hash: 9dead96b6951fd34fa6c7770070f6de9,kubernetes.io/config.seen: 2023-09-14T22:32:15.198534529Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=981cb9dc-1a78-4c98-b38a-0adaaed27f7c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.863478936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d9770378-1cc6-43e7-b242-9d9fcf6f5454 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.863563103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d9770378-1cc6-43e7-b242-9d9fcf6f5454 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.863775722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d9770378-1cc6-43e7-b242-9d9fcf6f5454 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.890957114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eca38a5a-f8da-4df7-9ee8-5f529a3aa0eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.891036643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eca38a5a-f8da-4df7-9ee8-5f529a3aa0eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:04 pause-354420 crio[2468]: time="2023-09-14 22:34:04.891430442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eca38a5a-f8da-4df7-9ee8-5f529a3aa0eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:05 pause-354420 crio[2468]: time="2023-09-14 22:34:05.045428299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0d9e9d05-050a-485b-9157-e143aaef451b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:05 pause-354420 crio[2468]: time="2023-09-14 22:34:05.045504012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0d9e9d05-050a-485b-9157-e143aaef451b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:05 pause-354420 crio[2468]: time="2023-09-14 22:34:05.045769686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0d9e9d05-050a-485b-9157-e143aaef451b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:05 pause-354420 crio[2468]: time="2023-09-14 22:34:05.089556923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eb8b5bd4-f6cd-4275-a076-83fd1c3603b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:05 pause-354420 crio[2468]: time="2023-09-14 22:34:05.089631879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb8b5bd4-f6cd-4275-a076-83fd1c3603b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:05 pause-354420 crio[2468]: time="2023-09-14 22:34:05.089909200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb8b5bd4-f6cd-4275-a076-83fd1c3603b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	549c4bcbd5a5a       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   16 seconds ago      Running             kube-proxy                2                   4f085cf201ab5
	50d599bcafec5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 seconds ago      Running             coredns                   2                   cba77754f8aa4
	248173e88a97f       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   21 seconds ago      Running             kube-controller-manager   2                   e208b25b7dab0
	fcf27877f3f6e       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   21 seconds ago      Running             kube-apiserver            2                   907db6fd00e29
	05536c50916c5       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   21 seconds ago      Running             kube-scheduler            2                   380c798a07f95
	40bdcfdd5a6b6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   21 seconds ago      Running             etcd                      2                   e4a8fecbd6a9e
	51c99507fd796       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   38 seconds ago      Exited              kube-proxy                1                   4f085cf201ab5
	c3dea3ffbbb69       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   39 seconds ago      Exited              coredns                   1                   cba77754f8aa4
	8c3f0e0d0a40e       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   39 seconds ago      Exited              kube-scheduler            1                   380c798a07f95
	a8a309a10f0b1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   40 seconds ago      Exited              etcd                      1                   e4a8fecbd6a9e
	c3bb34a9679cf       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   42 seconds ago      Exited              kube-apiserver            1                   c3cf145a2a4a8
	f070228f60de6       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   43 seconds ago      Exited              kube-controller-manager   1                   fc8d450e9e756
	
	* 
	* ==> coredns [50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38275 - 9579 "HINFO IN 1798067197754382644.147329406833308390. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009778785s
	
	* 
	* ==> coredns [c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45168 - 57216 "HINFO IN 7826584772612106630.2764886595922553763. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017483474s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-354420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-354420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=pause-354420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_32_15_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:32:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-354420
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:33:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:33:48 +0000   Thu, 14 Sep 2023 22:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:33:48 +0000   Thu, 14 Sep 2023 22:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:33:48 +0000   Thu, 14 Sep 2023 22:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:33:48 +0000   Thu, 14 Sep 2023 22:32:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    pause-354420
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 efceada1c2ec4e6bac3ac597d710f28f
	  System UUID:                efceada1-c2ec-4e6b-ac3a-c597d710f28f
	  Boot ID:                    fe479ebf-91e2-45ca-a6a2-6572d134f3ac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6q49n                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     95s
	  kube-system                 etcd-pause-354420                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         110s
	  kube-system                 kube-apiserver-pause-354420             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-pause-354420    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-fzt4z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-pause-354420             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node pause-354420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node pause-354420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node pause-354420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node pause-354420 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node pause-354420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node pause-354420 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  110s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                110s                 kubelet          Node pause-354420 status is now: NodeReady
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           98s                  node-controller  Node pause-354420 event: Registered Node pause-354420 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 23s)    kubelet          Node pause-354420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 23s)    kubelet          Node pause-354420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 23s)    kubelet          Node pause-354420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node pause-354420 event: Registered Node pause-354420 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.303676] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.013408] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135455] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.961109] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +14.015268] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.123952] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.155931] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.116630] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.224445] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[Sep14 22:32] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[ +11.317298] systemd-fstab-generator[1267]: Ignoring "noauto" for root device
	[Sep14 22:33] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.347195] systemd-fstab-generator[2064]: Ignoring "noauto" for root device
	[  +0.504164] systemd-fstab-generator[2237]: Ignoring "noauto" for root device
	[  +0.336216] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[  +0.237590] systemd-fstab-generator[2273]: Ignoring "noauto" for root device
	[  +0.483914] systemd-fstab-generator[2320]: Ignoring "noauto" for root device
	[ +20.453566] systemd-fstab-generator[3287]: Ignoring "noauto" for root device
	[  +7.183628] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e] <==
	* {"level":"info","ts":"2023-09-14T22:33:45.997544Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:33:45.997573Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:33:45.997582Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:33:45.997695Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2023-09-14T22:33:45.997703Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2023-09-14T22:33:46.266876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:46.266971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:46.267008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgPreVoteResp from d386e7203fab19ce at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:46.267025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became candidate at term 4"}
	{"level":"info","ts":"2023-09-14T22:33:46.267034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgVoteResp from d386e7203fab19ce at term 4"}
	{"level":"info","ts":"2023-09-14T22:33:46.267047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became leader at term 4"}
	{"level":"info","ts":"2023-09-14T22:33:46.267057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d386e7203fab19ce elected leader d386e7203fab19ce at term 4"}
	{"level":"info","ts":"2023-09-14T22:33:46.274938Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d386e7203fab19ce","local-member-attributes":"{Name:pause-354420 ClientURLs:[https://192.168.39.45:2379]}","request-path":"/0/members/d386e7203fab19ce/attributes","cluster-id":"34c61d36ecc5c83e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:33:46.276198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:33:46.277541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.45:2379"}
	{"level":"info","ts":"2023-09-14T22:33:46.277635Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:33:46.28172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:33:46.282208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:33:46.28226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:33:56.024931Z","caller":"traceutil/trace.go:171","msg":"trace[245118925] linearizableReadLoop","detail":"{readStateIndex:533; appliedIndex:532; }","duration":"299.753277ms","start":"2023-09-14T22:33:55.725164Z","end":"2023-09-14T22:33:56.024917Z","steps":["trace[245118925] 'read index received'  (duration: 299.566021ms)","trace[245118925] 'applied index is now lower than readState.Index'  (duration: 186.691µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:33:56.025132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.982276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-354420\" ","response":"range_response_count:1 size:5468"}
	{"level":"info","ts":"2023-09-14T22:33:56.025179Z","caller":"traceutil/trace.go:171","msg":"trace[812360223] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-354420; range_end:; response_count:1; response_revision:489; }","duration":"300.092267ms","start":"2023-09-14T22:33:55.725076Z","end":"2023-09-14T22:33:56.025168Z","steps":["trace[812360223] 'agreement among raft nodes before linearized reading'  (duration: 299.918663ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:33:56.025203Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:33:55.725065Z","time spent":"300.132021ms","remote":"127.0.0.1:38550","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5491,"request content":"key:\"/registry/pods/kube-system/etcd-pause-354420\" "}
	{"level":"info","ts":"2023-09-14T22:33:56.025401Z","caller":"traceutil/trace.go:171","msg":"trace[1213416699] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"301.520337ms","start":"2023-09-14T22:33:55.723863Z","end":"2023-09-14T22:33:56.025383Z","steps":["trace[1213416699] 'process raft request'  (duration: 300.91771ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:33:56.026014Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:33:55.723846Z","time spent":"301.742469ms","remote":"127.0.0.1:38550","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-354420\" mod_revision:437 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-354420\" value_size:6351 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-354420\" > >"}
	
	* 
	* ==> etcd [a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30] <==
	* {"level":"info","ts":"2023-09-14T22:33:26.297561Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T22:33:28.170581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T22:33:28.170722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:33:28.170774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgPreVoteResp from d386e7203fab19ce at term 2"}
	{"level":"info","ts":"2023-09-14T22:33:28.170813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:28.170845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgVoteResp from d386e7203fab19ce at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:28.170904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became leader at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:28.170937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d386e7203fab19ce elected leader d386e7203fab19ce at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:28.17787Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d386e7203fab19ce","local-member-attributes":"{Name:pause-354420 ClientURLs:[https://192.168.39.45:2379]}","request-path":"/0/members/d386e7203fab19ce/attributes","cluster-id":"34c61d36ecc5c83e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:33:28.177956Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:33:28.178271Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:33:28.178324Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:33:28.17835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:33:28.179476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:33:28.179934Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.45:2379"}
	{"level":"info","ts":"2023-09-14T22:33:41.226227Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-14T22:33:41.226321Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-354420","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.45:2380"],"advertise-client-urls":["https://192.168.39.45:2379"]}
	{"level":"warn","ts":"2023-09-14T22:33:41.226454Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:33:41.226523Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:33:41.228857Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.45:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:33:41.229563Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.45:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-14T22:33:41.230721Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d386e7203fab19ce","current-leader-member-id":"d386e7203fab19ce"}
	{"level":"info","ts":"2023-09-14T22:33:41.235719Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2023-09-14T22:33:41.236024Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2023-09-14T22:33:41.236186Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-354420","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.45:2380"],"advertise-client-urls":["https://192.168.39.45:2379"]}
	
	* 
	* ==> kernel <==
	*  22:34:05 up 2 min,  0 users,  load average: 1.34, 0.62, 0.23
	Linux pause-354420 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071] <==
	* 
	* 
	* ==> kube-apiserver [fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530] <==
	* I0914 22:33:48.251850       1 establishing_controller.go:76] Starting EstablishingController
	I0914 22:33:48.251872       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0914 22:33:48.251885       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0914 22:33:48.251899       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0914 22:33:48.410845       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 22:33:48.433675       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 22:33:48.434037       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 22:33:48.442958       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 22:33:48.443000       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 22:33:48.447699       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 22:33:48.447817       1 aggregator.go:166] initial CRD sync complete...
	I0914 22:33:48.447854       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 22:33:48.447862       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 22:33:48.447868       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:33:48.452601       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 22:33:48.460651       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 22:33:48.512480       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 22:33:49.222889       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 22:33:50.037776       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 22:33:50.072288       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 22:33:50.122472       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 22:33:50.151577       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:33:50.164414       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 22:34:00.658341       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:34:00.731078       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406] <==
	* I0914 22:34:00.637584       1 taint_manager.go:211] "Sending events to api server"
	I0914 22:34:00.637634       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0914 22:34:00.638406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="395.987µs"
	I0914 22:34:00.638458       1 shared_informer.go:318] Caches are synced for cronjob
	I0914 22:34:00.638489       1 shared_informer.go:318] Caches are synced for deployment
	I0914 22:34:00.638783       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-354420"
	I0914 22:34:00.638879       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0914 22:34:00.639713       1 event.go:307] "Event occurred" object="pause-354420" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-354420 event: Registered Node pause-354420 in Controller"
	I0914 22:34:00.640469       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0914 22:34:00.640652       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0914 22:34:00.643608       1 shared_informer.go:318] Caches are synced for PV protection
	I0914 22:34:00.643796       1 shared_informer.go:318] Caches are synced for TTL
	I0914 22:34:00.652634       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 22:34:00.654183       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 22:34:00.654845       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 22:34:00.655665       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 22:34:00.698201       1 shared_informer.go:318] Caches are synced for endpoint
	I0914 22:34:00.708339       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0914 22:34:00.752800       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:34:00.752899       1 shared_informer.go:318] Caches are synced for HPA
	I0914 22:34:00.781324       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:34:00.801789       1 shared_informer.go:318] Caches are synced for crt configmap
	I0914 22:34:01.217724       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 22:34:01.217863       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0914 22:34:01.241302       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994] <==
	* 
	* 
	* ==> kube-proxy [51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91] <==
	* I0914 22:33:26.865426       1 server_others.go:69] "Using iptables proxy"
	E0914 22:33:26.868512       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-354420": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:28.002026       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-354420": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:30.276174       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-354420": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:34.685243       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-354420": dial tcp 192.168.39.45:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a] <==
	* I0914 22:33:49.433724       1 server_others.go:69] "Using iptables proxy"
	I0914 22:33:49.452412       1 node.go:141] Successfully retrieved node IP: 192.168.39.45
	I0914 22:33:49.524249       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:33:49.524309       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:33:49.533679       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:33:49.533783       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:33:49.534025       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:33:49.534041       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:33:49.536177       1 config.go:188] "Starting service config controller"
	I0914 22:33:49.536243       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:33:49.536278       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:33:49.536284       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:33:49.536856       1 config.go:315] "Starting node config controller"
	I0914 22:33:49.536865       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:33:49.637385       1 shared_informer.go:318] Caches are synced for node config
	I0914 22:33:49.637430       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:33:49.637461       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287] <==
	* I0914 22:33:46.072359       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:33:48.304007       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:33:48.304151       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:33:48.304164       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:33:48.304281       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:33:48.378780       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:33:48.378832       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:33:48.387289       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:33:48.387673       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:33:48.387754       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:33:48.387773       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:33:48.488971       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed] <==
	* E0914 22:33:35.141353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.45:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:35.735936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.45:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:35.736182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.45:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:35.966652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.45:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:35.966708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.45:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.242782       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.45:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.242904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.45:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.518650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.45:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.518707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.45:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.525304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.45:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.525367       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.45:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.558848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.45:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.558942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.45:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.720727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.45:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.720786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.45:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.904783       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.45:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.904875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.45:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:37.464302       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.45:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:37.464404       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.45:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:37.576017       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.45:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:37.576078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.45:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:41.043589       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0914 22:33:41.044778       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0914 22:33:41.044885       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0914 22:33:41.045005       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:31:34 UTC, ends at Thu 2023-09-14 22:34:05 UTC. --
	Sep 14 22:33:43 pause-354420 kubelet[3293]: E0914 22:33:43.561185    3293 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.45:8443: connect: connection refused" node="pause-354420"
	Sep 14 22:33:43 pause-354420 kubelet[3293]: W0914 22:33:43.687411    3293 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: E0914 22:33:43.687499    3293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: W0914 22:33:43.935303    3293 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: E0914 22:33:43.935414    3293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: W0914 22:33:43.948296    3293 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-354420&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: E0914 22:33:43.948375    3293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-354420&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:44 pause-354420 kubelet[3293]: E0914 22:33:44.247844    3293 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-354420?timeout=10s\": dial tcp 192.168.39.45:8443: connect: connection refused" interval="1.6s"
	Sep 14 22:33:44 pause-354420 kubelet[3293]: W0914 22:33:44.341711    3293 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:44 pause-354420 kubelet[3293]: E0914 22:33:44.341835    3293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:44 pause-354420 kubelet[3293]: I0914 22:33:44.363463    3293 kubelet_node_status.go:70] "Attempting to register node" node="pause-354420"
	Sep 14 22:33:44 pause-354420 kubelet[3293]: E0914 22:33:44.364164    3293 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.45:8443: connect: connection refused" node="pause-354420"
	Sep 14 22:33:45 pause-354420 kubelet[3293]: I0914 22:33:45.966272    3293 kubelet_node_status.go:70] "Attempting to register node" node="pause-354420"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.486049    3293 kubelet_node_status.go:108] "Node was previously registered" node="pause-354420"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.486229    3293 kubelet_node_status.go:73] "Successfully registered node" node="pause-354420"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.488760    3293 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.489966    3293 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.798898    3293 apiserver.go:52] "Watching apiserver"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.802775    3293 topology_manager.go:215] "Topology Admit Handler" podUID="cba0aa8a-8a13-414c-8d84-7de5a8f6b945" podNamespace="kube-system" podName="kube-proxy-fzt4z"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.802965    3293 topology_manager.go:215] "Topology Admit Handler" podUID="b0fa85bd-c439-4a5a-9e2a-552faa59e3c0" podNamespace="kube-system" podName="coredns-5dd5756b68-6q49n"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.840935    3293 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.927338    3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cba0aa8a-8a13-414c-8d84-7de5a8f6b945-xtables-lock\") pod \"kube-proxy-fzt4z\" (UID: \"cba0aa8a-8a13-414c-8d84-7de5a8f6b945\") " pod="kube-system/kube-proxy-fzt4z"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.927399    3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cba0aa8a-8a13-414c-8d84-7de5a8f6b945-lib-modules\") pod \"kube-proxy-fzt4z\" (UID: \"cba0aa8a-8a13-414c-8d84-7de5a8f6b945\") " pod="kube-system/kube-proxy-fzt4z"
	Sep 14 22:33:49 pause-354420 kubelet[3293]: I0914 22:33:49.104326    3293 scope.go:117] "RemoveContainer" containerID="c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d"
	Sep 14 22:33:49 pause-354420 kubelet[3293]: I0914 22:33:49.104886    3293 scope.go:117] "RemoveContainer" containerID="51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-354420 -n pause-354420
helpers_test.go:261: (dbg) Run:  kubectl --context pause-354420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-354420 -n pause-354420
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-354420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-354420 logs -n 25: (1.19777322s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:30 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:30 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-997589       | scheduled-stop-997589     | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:31 UTC |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-948115         | offline-crio-948115       | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:33 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-354420 --memory=2048  | pause-354420              | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:33 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:33 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-354420                | pause-354420              | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:34 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-948115         | offline-crio-948115       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	| start   | -p kubernetes-upgrade-711912   | kubernetes-upgrade-711912 | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-995756      | running-upgrade-995756    | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-982498 sudo    | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC | 14 Sep 23 22:33 UTC |
	| start   | -p NoKubernetes-982498         | NoKubernetes-982498       | jenkins | v1.31.2 | 14 Sep 23 22:33 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:33:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:33:51.688445   38243 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:33:51.688748   38243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:33:51.688753   38243 out.go:309] Setting ErrFile to fd 2...
	I0914 22:33:51.688758   38243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:33:51.689029   38243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:33:51.689687   38243 out.go:303] Setting JSON to false
	I0914 22:33:51.690937   38243 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4574,"bootTime":1694726258,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:33:51.691007   38243 start.go:138] virtualization: kvm guest
	I0914 22:33:51.693268   38243 out.go:177] * [NoKubernetes-982498] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:33:51.694984   38243 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:33:51.695036   38243 notify.go:220] Checking for updates...
	I0914 22:33:51.696741   38243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:33:51.698423   38243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:33:51.700241   38243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:33:51.701701   38243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:33:51.703192   38243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:33:51.705168   38243 config.go:182] Loaded profile config "NoKubernetes-982498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0914 22:33:51.705584   38243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:51.705638   38243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:51.720694   38243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I0914 22:33:51.721124   38243 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:51.722043   38243 main.go:141] libmachine: Using API Version  1
	I0914 22:33:51.722058   38243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:51.723439   38243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:51.723834   38243 main.go:141] libmachine: (NoKubernetes-982498) Calling .DriverName
	I0914 22:33:51.724058   38243 start.go:1720] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0914 22:33:51.724079   38243 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:33:51.724410   38243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:33:51.724439   38243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:33:51.740985   38243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45453
	I0914 22:33:51.741424   38243 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:33:51.741897   38243 main.go:141] libmachine: Using API Version  1
	I0914 22:33:51.741911   38243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:33:51.742250   38243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:33:51.742464   38243 main.go:141] libmachine: (NoKubernetes-982498) Calling .DriverName
	I0914 22:33:51.780240   38243 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:33:51.781734   38243 start.go:298] selected driver: kvm2
	I0914 22:33:51.781739   38243 start.go:902] validating driver "kvm2" against &{Name:NoKubernetes-982498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-982498 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.168 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:33:51.781827   38243 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:33:51.782114   38243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:51.782189   38243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:33:51.797132   38243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:33:51.797858   38243 cni.go:84] Creating CNI manager for ""
	I0914 22:33:51.797871   38243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:33:51.797883   38243 start_flags.go:321] config:
	{Name:NoKubernetes-982498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-982498 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.168 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:33:51.798015   38243 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:33:51.800027   38243 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-982498
	I0914 22:33:48.150767   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:48.151247   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:48.151278   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:48.151203   37979 retry.go:31] will retry after 719.072833ms: waiting for machine to come up
	I0914 22:33:48.871415   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:48.871936   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:48.871962   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:48.871914   37979 retry.go:31] will retry after 1.25318085s: waiting for machine to come up
	I0914 22:33:50.126396   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:50.126889   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:50.126920   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:50.126850   37979 retry.go:31] will retry after 1.801046185s: waiting for machine to come up
	I0914 22:33:51.929241   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:51.929779   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:51.929808   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:51.929733   37979 retry.go:31] will retry after 2.070618875s: waiting for machine to come up
	I0914 22:33:49.880212   37436 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:33:49.891355   37436 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:33:49.912114   37436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:33:49.928157   37436 system_pods.go:59] 6 kube-system pods found
	I0914 22:33:49.928217   37436 system_pods.go:61] "coredns-5dd5756b68-6q49n" [b0fa85bd-c439-4a5a-9e2a-552faa59e3c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:33:49.928241   37436 system_pods.go:61] "etcd-pause-354420" [1378a8dc-5ec6-405e-ab63-77299387c832] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:33:49.928268   37436 system_pods.go:61] "kube-apiserver-pause-354420" [7bb1b6a6-5647-4ae6-a1f1-e022bdd395e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:33:49.928289   37436 system_pods.go:61] "kube-controller-manager-pause-354420" [78b07d77-93ca-4961-b103-1ea6754fd60d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:33:49.928307   37436 system_pods.go:61] "kube-proxy-fzt4z" [cba0aa8a-8a13-414c-8d84-7de5a8f6b945] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:33:49.928325   37436 system_pods.go:61] "kube-scheduler-pause-354420" [1752fa56-321d-41a0-b1a2-798db32e0b95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:33:49.928341   37436 system_pods.go:74] duration metric: took 16.21046ms to wait for pod list to return data ...
	I0914 22:33:49.928358   37436 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:33:49.931929   37436 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:33:49.931962   37436 node_conditions.go:123] node cpu capacity is 2
	I0914 22:33:49.931975   37436 node_conditions.go:105] duration metric: took 3.603904ms to run NodePressure ...
	I0914 22:33:49.931994   37436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:33:50.177529   37436 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:33:50.182885   37436 kubeadm.go:787] kubelet initialised
	I0914 22:33:50.182910   37436 kubeadm.go:788] duration metric: took 5.350492ms waiting for restarted kubelet to initialise ...
	I0914 22:33:50.182920   37436 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:33:50.187974   37436 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:50.714445   37436 pod_ready.go:92] pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace has status "Ready":"True"
	I0914 22:33:50.714473   37436 pod_ready.go:81] duration metric: took 526.47739ms waiting for pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:50.714485   37436 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:52.735854   37436 pod_ready.go:102] pod "etcd-pause-354420" in "kube-system" namespace has status "Ready":"False"
	I0914 22:33:51.801491   38243 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0914 22:33:52.208916   38243 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0914 22:33:52.209044   38243 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/NoKubernetes-982498/config.json ...
	I0914 22:33:52.209321   38243 start.go:365] acquiring machines lock for NoKubernetes-982498: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:33:54.001647   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:54.002264   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:54.002295   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:54.002214   37979 retry.go:31] will retry after 2.543119841s: waiting for machine to come up
	I0914 22:33:56.548671   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:56.549231   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:56.549272   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:56.549198   37979 retry.go:31] will retry after 2.444315254s: waiting for machine to come up
	I0914 22:33:55.236233   37436 pod_ready.go:102] pod "etcd-pause-354420" in "kube-system" namespace has status "Ready":"False"
	I0914 22:33:57.234406   37436 pod_ready.go:92] pod "etcd-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:33:57.234431   37436 pod_ready.go:81] duration metric: took 6.519938592s waiting for pod "etcd-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:57.234440   37436 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:33:59.252328   37436 pod_ready.go:102] pod "kube-apiserver-pause-354420" in "kube-system" namespace has status "Ready":"False"
	I0914 22:34:00.757125   37436 pod_ready.go:92] pod "kube-apiserver-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:00.757152   37436 pod_ready.go:81] duration metric: took 3.522704525s waiting for pod "kube-apiserver-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.757170   37436 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.762482   37436 pod_ready.go:92] pod "kube-controller-manager-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:00.762505   37436 pod_ready.go:81] duration metric: took 5.327268ms waiting for pod "kube-controller-manager-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.762514   37436 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fzt4z" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.769045   37436 pod_ready.go:92] pod "kube-proxy-fzt4z" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:00.769063   37436 pod_ready.go:81] duration metric: took 6.543741ms waiting for pod "kube-proxy-fzt4z" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.769071   37436 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.774138   37436 pod_ready.go:92] pod "kube-scheduler-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:00.774162   37436 pod_ready.go:81] duration metric: took 5.08424ms waiting for pod "kube-scheduler-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:00.774173   37436 pod_ready.go:38] duration metric: took 10.591241383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:34:00.774195   37436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:34:00.789845   37436 ops.go:34] apiserver oom_adj: -16
	I0914 22:34:00.789864   37436 kubeadm.go:640] restartCluster took 35.135131308s
	I0914 22:34:00.789873   37436 kubeadm.go:406] StartCluster complete in 35.356557623s
	I0914 22:34:00.789891   37436 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:34:00.789979   37436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:34:00.790681   37436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:34:00.790898   37436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:34:00.791044   37436 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:34:00.791146   37436 config.go:182] Loaded profile config "pause-354420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:34:00.792935   37436 out.go:177] * Enabled addons: 
	I0914 22:34:00.791452   37436 kapi.go:59] client config for pause-354420: &rest.Config{Host:"https://192.168.39.45:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/profiles/pause-354420/client.key", CAFile:"/home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:34:00.794450   37436 addons.go:502] enable addons completed in 3.413911ms: enabled=[]
	I0914 22:34:00.798282   37436 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-354420" context rescaled to 1 replicas
	I0914 22:34:00.798310   37436 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:34:00.799853   37436 out.go:177] * Verifying Kubernetes components...
	I0914 22:33:58.995303   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:33:58.995778   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:33:58.995836   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:33:58.995711   37979 retry.go:31] will retry after 3.712127836s: waiting for machine to come up
	I0914 22:34:02.712396   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | domain kubernetes-upgrade-711912 has defined MAC address 52:54:00:4e:38:64 in network mk-kubernetes-upgrade-711912
	I0914 22:34:02.713003   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | unable to find current IP address of domain kubernetes-upgrade-711912 in network mk-kubernetes-upgrade-711912
	I0914 22:34:02.713034   37874 main.go:141] libmachine: (kubernetes-upgrade-711912) DBG | I0914 22:34:02.712949   37979 retry.go:31] will retry after 4.412404699s: waiting for machine to come up
	I0914 22:34:00.801237   37436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:34:00.912310   37436 node_ready.go:35] waiting up to 6m0s for node "pause-354420" to be "Ready" ...
	I0914 22:34:00.912323   37436 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:34:00.915176   37436 node_ready.go:49] node "pause-354420" has status "Ready":"True"
	I0914 22:34:00.915191   37436 node_ready.go:38] duration metric: took 2.849627ms waiting for node "pause-354420" to be "Ready" ...
	I0914 22:34:00.915199   37436 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:34:01.118971   37436 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:01.516913   37436 pod_ready.go:92] pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:01.516940   37436 pod_ready.go:81] duration metric: took 397.941577ms waiting for pod "coredns-5dd5756b68-6q49n" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:01.516953   37436 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:01.916702   37436 pod_ready.go:92] pod "etcd-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:01.916727   37436 pod_ready.go:81] duration metric: took 399.767579ms waiting for pod "etcd-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:01.916738   37436 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:02.317161   37436 pod_ready.go:92] pod "kube-apiserver-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:02.317185   37436 pod_ready.go:81] duration metric: took 400.439277ms waiting for pod "kube-apiserver-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:02.317197   37436 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:02.716542   37436 pod_ready.go:92] pod "kube-controller-manager-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:02.716563   37436 pod_ready.go:81] duration metric: took 399.358595ms waiting for pod "kube-controller-manager-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:02.716572   37436 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fzt4z" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:03.115713   37436 pod_ready.go:92] pod "kube-proxy-fzt4z" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:03.115738   37436 pod_ready.go:81] duration metric: took 399.159745ms waiting for pod "kube-proxy-fzt4z" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:03.115747   37436 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:03.517207   37436 pod_ready.go:92] pod "kube-scheduler-pause-354420" in "kube-system" namespace has status "Ready":"True"
	I0914 22:34:03.517234   37436 pod_ready.go:81] duration metric: took 401.479188ms waiting for pod "kube-scheduler-pause-354420" in "kube-system" namespace to be "Ready" ...
	I0914 22:34:03.517257   37436 pod_ready.go:38] duration metric: took 2.60204941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:34:03.517277   37436 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:34:03.517336   37436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:34:03.530335   37436 api_server.go:72] duration metric: took 2.732004295s to wait for apiserver process to appear ...
	I0914 22:34:03.530362   37436 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:34:03.530394   37436 api_server.go:253] Checking apiserver healthz at https://192.168.39.45:8443/healthz ...
	I0914 22:34:03.535289   37436 api_server.go:279] https://192.168.39.45:8443/healthz returned 200:
	ok
	I0914 22:34:03.536480   37436 api_server.go:141] control plane version: v1.28.1
	I0914 22:34:03.536506   37436 api_server.go:131] duration metric: took 6.136853ms to wait for apiserver health ...
	I0914 22:34:03.536516   37436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:34:03.721078   37436 system_pods.go:59] 6 kube-system pods found
	I0914 22:34:03.721114   37436 system_pods.go:61] "coredns-5dd5756b68-6q49n" [b0fa85bd-c439-4a5a-9e2a-552faa59e3c0] Running
	I0914 22:34:03.721122   37436 system_pods.go:61] "etcd-pause-354420" [1378a8dc-5ec6-405e-ab63-77299387c832] Running
	I0914 22:34:03.721129   37436 system_pods.go:61] "kube-apiserver-pause-354420" [7bb1b6a6-5647-4ae6-a1f1-e022bdd395e0] Running
	I0914 22:34:03.721136   37436 system_pods.go:61] "kube-controller-manager-pause-354420" [78b07d77-93ca-4961-b103-1ea6754fd60d] Running
	I0914 22:34:03.721143   37436 system_pods.go:61] "kube-proxy-fzt4z" [cba0aa8a-8a13-414c-8d84-7de5a8f6b945] Running
	I0914 22:34:03.721149   37436 system_pods.go:61] "kube-scheduler-pause-354420" [1752fa56-321d-41a0-b1a2-798db32e0b95] Running
	I0914 22:34:03.721156   37436 system_pods.go:74] duration metric: took 184.634034ms to wait for pod list to return data ...
	I0914 22:34:03.721164   37436 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:34:03.916296   37436 default_sa.go:45] found service account: "default"
	I0914 22:34:03.916330   37436 default_sa.go:55] duration metric: took 195.15841ms for default service account to be created ...
	I0914 22:34:03.916341   37436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:34:04.121966   37436 system_pods.go:86] 6 kube-system pods found
	I0914 22:34:04.122000   37436 system_pods.go:89] "coredns-5dd5756b68-6q49n" [b0fa85bd-c439-4a5a-9e2a-552faa59e3c0] Running
	I0914 22:34:04.122009   37436 system_pods.go:89] "etcd-pause-354420" [1378a8dc-5ec6-405e-ab63-77299387c832] Running
	I0914 22:34:04.122016   37436 system_pods.go:89] "kube-apiserver-pause-354420" [7bb1b6a6-5647-4ae6-a1f1-e022bdd395e0] Running
	I0914 22:34:04.122023   37436 system_pods.go:89] "kube-controller-manager-pause-354420" [78b07d77-93ca-4961-b103-1ea6754fd60d] Running
	I0914 22:34:04.122031   37436 system_pods.go:89] "kube-proxy-fzt4z" [cba0aa8a-8a13-414c-8d84-7de5a8f6b945] Running
	I0914 22:34:04.122036   37436 system_pods.go:89] "kube-scheduler-pause-354420" [1752fa56-321d-41a0-b1a2-798db32e0b95] Running
	I0914 22:34:04.122045   37436 system_pods.go:126] duration metric: took 205.698752ms to wait for k8s-apps to be running ...
	I0914 22:34:04.122061   37436 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:34:04.122112   37436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:34:04.136919   37436 system_svc.go:56] duration metric: took 14.85116ms WaitForService to wait for kubelet.
	I0914 22:34:04.136946   37436 kubeadm.go:581] duration metric: took 3.338618818s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:34:04.136969   37436 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:34:04.315762   37436 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:34:04.315786   37436 node_conditions.go:123] node cpu capacity is 2
	I0914 22:34:04.315797   37436 node_conditions.go:105] duration metric: took 178.823037ms to run NodePressure ...
	I0914 22:34:04.315811   37436 start.go:228] waiting for startup goroutines ...
	I0914 22:34:04.315817   37436 start.go:233] waiting for cluster config update ...
	I0914 22:34:04.315823   37436 start.go:242] writing updated cluster config ...
	I0914 22:34:04.316125   37436 ssh_runner.go:195] Run: rm -f paused
	I0914 22:34:04.362656   37436 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:34:04.364723   37436 out.go:177] * Done! kubectl is now configured to use "pause-354420" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:31:34 UTC, ends at Thu 2023-09-14 22:34:06 UTC. --
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.116600907Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-6q49n,Uid:b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803885719605,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:32:30.793568291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-354420,Uid:297350e03cbdb80c3730eb7ffa543bdc,Namespace:kube-system,
Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803855433045,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 297350e03cbdb80c3730eb7ffa543bdc,kubernetes.io/config.seen: 2023-09-14T22:32:15.198541757Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-354420,Uid:d01539766913262579c92cafe9e2828a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803829877985,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d015397669132
62579c92cafe9e2828a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d01539766913262579c92cafe9e2828a,kubernetes.io/config.seen: 2023-09-14T22:32:15.198540739Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-354420,Uid:ed4793bb373a24bab68281d3d96396f0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803817564369,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96396f0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.45:8443,kubernetes.io/config.hash: ed4793bb373a24bab68281d3d96396f0,kubernetes.io/config.seen: 2023-09-14T22:32:15.198539413Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&PodSandboxMetadata{Name:kube-proxy-fzt4z,Uid:cba0aa8a-8a13-414c-8d84-7de5a8f6b945,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803774415968,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:32:30.535986605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&PodSandboxMetadata{Name:etcd-pause-354420,Uid:9dead96b6951fd34fa6c7770070f6de9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694730803707586120,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.45:2379,kubernetes.io/config.hash: 9dead96b6951fd34fa6c7770070f6de9,kubernetes.io/config.seen: 2023-09-14T22:32:15.198534529Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-354420,Uid:d01539766913262579c92cafe9e2828a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1694730800751415274,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,tier: control-plane,},Annotations:map[string
]string{kubernetes.io/config.hash: d01539766913262579c92cafe9e2828a,kubernetes.io/config.seen: 2023-09-14T22:32:15.198540739Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-354420,Uid:ed4793bb373a24bab68281d3d96396f0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1694730800746145005,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96396f0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.45:8443,kubernetes.io/config.hash: ed4793bb373a24bab68281d3d96396f0,kubernetes.io/config.seen: 2023-09-14T22:32:15.198539413Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.
go:25" id=c38a12bf-b4c7-4126-b8b1-907000beef6e name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.117508779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a329f31a-3c11-43cd-b964-b4bdf7ef9257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.117576321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a329f31a-3c11-43cd-b964-b4bdf7ef9257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.117908597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a329f31a-3c11-43cd-b964-b4bdf7ef9257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.586905034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b63f9d77-932b-405c-a79e-2aec0e7e3951 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.587014987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b63f9d77-932b-405c-a79e-2aec0e7e3951 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.587442845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b63f9d77-932b-405c-a79e-2aec0e7e3951 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.626829407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8d0faa00-55d0-4552-a10e-018d354625b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.626943094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8d0faa00-55d0-4552-a10e-018d354625b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.627364716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8d0faa00-55d0-4552-a10e-018d354625b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.667054771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=82d44e3f-94ec-44d0-862a-1de163223865 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.667219535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=82d44e3f-94ec-44d0-862a-1de163223865 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.667556134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=82d44e3f-94ec-44d0-862a-1de163223865 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.705387337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c9c38ae6-9262-4836-a7ec-745806bbf663 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.705499463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c9c38ae6-9262-4836-a7ec-745806bbf663 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.705818667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c9c38ae6-9262-4836-a7ec-745806bbf663 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.748641991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7e9d44a3-2951-4313-a06c-6518cfc83fbf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.748726275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7e9d44a3-2951-4313-a06c-6518cfc83fbf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.749195161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7e9d44a3-2951-4313-a06c-6518cfc83fbf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.797164607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0281da33-5508-4171-9c32-c2439e5b82cb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.797249235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0281da33-5508-4171-9c32-c2439e5b82cb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.797485347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0281da33-5508-4171-9c32-c2439e5b82cb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.830375271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f23035ad-68c0-4525-8378-e9489912ce57 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.830456725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f23035ad-68c0-4525-8378-e9489912ce57 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 22:34:06 pause-354420 crio[2468]: time="2023-09-14 22:34:06.830776332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694730829138634712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694730829145851715,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash: cc6af761,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406,PodSandboxId:e208b25b7dab09d52177c5827b2bcf2f1193e464334a6e02db91bb28946c1be5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694730823586042297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530,PodSandboxId:907db6fd00e29928614a45410ea09fdecb832178f9b602291752f4edd7ec0107,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694730823544044593,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ed4793bb373a24bab68281d3d96396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694730823517450381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297
350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694730823492181580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91,PodSandboxId:4f085cf201ab58d4e512feb101861e440956824f4b78f70566c2650e558c7cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694730806569389620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzt4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba0aa8a-8a13-414c-8d84-7de5a8f6b945,},Annotations:map[string]string{io.kubernetes.container.hash:
cc6af761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d,PodSandboxId:cba77754f8aa461b80d14933d84239cfbec76132d82799acec369c755f348375,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694730805593746009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6q49n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0fa85bd-c439-4a5a-9e2a-552faa59e3c0,},Annotations:map[string]string{io.kubernetes.container.hash: d4f23e36,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed,PodSandboxId:380c798a07f957b1d8838838ba31f3f9535032c8dcc36c5c18214d6cb4f05a8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694730805291153244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297350e03cbdb80c3730eb7ffa543bdc,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30,PodSandboxId:e4a8fecbd6a9e473d98652598eda9d989cb5a67f482fce8583f5462280b60204,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694730804771696954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-354420,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 9dead96b6951fd34fa6c7770070f6de9,},Annotations:map[string]string{io.kubernetes.container.hash: 449e8d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071,PodSandboxId:c3cf145a2a4a8b64ae9f2c16a02e80e23cb57d220c0e990887eea50f0a3565cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694730802211770271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed4793bb373a24bab68281d3d96
396f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4f7c9572,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994,PodSandboxId:fc8d450e9e756223e9c0cb55758abee68e4945e1a37f690e6ac29457b7c286b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694730801657877297,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-354420,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d01539766913262579c92cafe9e2828a,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f23035ad-68c0-4525-8378-e9489912ce57 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	549c4bcbd5a5a       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   17 seconds ago      Running             kube-proxy                2                   4f085cf201ab5
	50d599bcafec5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   17 seconds ago      Running             coredns                   2                   cba77754f8aa4
	248173e88a97f       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   23 seconds ago      Running             kube-controller-manager   2                   e208b25b7dab0
	fcf27877f3f6e       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   23 seconds ago      Running             kube-apiserver            2                   907db6fd00e29
	05536c50916c5       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   23 seconds ago      Running             kube-scheduler            2                   380c798a07f95
	40bdcfdd5a6b6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago      Running             etcd                      2                   e4a8fecbd6a9e
	51c99507fd796       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   40 seconds ago      Exited              kube-proxy                1                   4f085cf201ab5
	c3dea3ffbbb69       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   41 seconds ago      Exited              coredns                   1                   cba77754f8aa4
	8c3f0e0d0a40e       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   41 seconds ago      Exited              kube-scheduler            1                   380c798a07f95
	a8a309a10f0b1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   42 seconds ago      Exited              etcd                      1                   e4a8fecbd6a9e
	c3bb34a9679cf       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   44 seconds ago      Exited              kube-apiserver            1                   c3cf145a2a4a8
	f070228f60de6       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   45 seconds ago      Exited              kube-controller-manager   1                   fc8d450e9e756
	
	* 
	* ==> coredns [50d599bcafec57f61b9a0396638fc0b2ea3062d22ce65f54580ce62639cb9d5f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38275 - 9579 "HINFO IN 1798067197754382644.147329406833308390. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009778785s
	
	* 
	* ==> coredns [c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45168 - 57216 "HINFO IN 7826584772612106630.2764886595922553763. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017483474s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-354420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-354420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=pause-354420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_32_15_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:32:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-354420
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:33:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:33:48 +0000   Thu, 14 Sep 2023 22:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:33:48 +0000   Thu, 14 Sep 2023 22:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:33:48 +0000   Thu, 14 Sep 2023 22:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:33:48 +0000   Thu, 14 Sep 2023 22:32:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    pause-354420
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 efceada1c2ec4e6bac3ac597d710f28f
	  System UUID:                efceada1-c2ec-4e6b-ac3a-c597d710f28f
	  Boot ID:                    fe479ebf-91e2-45ca-a6a2-6572d134f3ac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6q49n                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     97s
	  kube-system                 etcd-pause-354420                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         112s
	  kube-system                 kube-apiserver-pause-354420             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-pause-354420    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-fzt4z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-pause-354420             100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node pause-354420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node pause-354420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node pause-354420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node pause-354420 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node pause-354420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node pause-354420 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                112s                 kubelet          Node pause-354420 status is now: NodeReady
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           100s                 node-controller  Node pause-354420 event: Registered Node pause-354420 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)    kubelet          Node pause-354420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)    kubelet          Node pause-354420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)    kubelet          Node pause-354420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-354420 event: Registered Node pause-354420 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.303676] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.013408] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135455] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.961109] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +14.015268] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.123952] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.155931] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.116630] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.224445] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[Sep14 22:32] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[ +11.317298] systemd-fstab-generator[1267]: Ignoring "noauto" for root device
	[Sep14 22:33] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.347195] systemd-fstab-generator[2064]: Ignoring "noauto" for root device
	[  +0.504164] systemd-fstab-generator[2237]: Ignoring "noauto" for root device
	[  +0.336216] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[  +0.237590] systemd-fstab-generator[2273]: Ignoring "noauto" for root device
	[  +0.483914] systemd-fstab-generator[2320]: Ignoring "noauto" for root device
	[ +20.453566] systemd-fstab-generator[3287]: Ignoring "noauto" for root device
	[  +7.183628] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [40bdcfdd5a6b64106510962d6e35cfb16bad20206c716b04a61916724f5d451e] <==
	* {"level":"info","ts":"2023-09-14T22:33:45.997544Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:33:45.997573Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:33:45.997582Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:33:45.997695Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2023-09-14T22:33:45.997703Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2023-09-14T22:33:46.266876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:46.266971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:46.267008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgPreVoteResp from d386e7203fab19ce at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:46.267025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became candidate at term 4"}
	{"level":"info","ts":"2023-09-14T22:33:46.267034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgVoteResp from d386e7203fab19ce at term 4"}
	{"level":"info","ts":"2023-09-14T22:33:46.267047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became leader at term 4"}
	{"level":"info","ts":"2023-09-14T22:33:46.267057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d386e7203fab19ce elected leader d386e7203fab19ce at term 4"}
	{"level":"info","ts":"2023-09-14T22:33:46.274938Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d386e7203fab19ce","local-member-attributes":"{Name:pause-354420 ClientURLs:[https://192.168.39.45:2379]}","request-path":"/0/members/d386e7203fab19ce/attributes","cluster-id":"34c61d36ecc5c83e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:33:46.276198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:33:46.277541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.45:2379"}
	{"level":"info","ts":"2023-09-14T22:33:46.277635Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:33:46.28172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:33:46.282208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:33:46.28226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:33:56.024931Z","caller":"traceutil/trace.go:171","msg":"trace[245118925] linearizableReadLoop","detail":"{readStateIndex:533; appliedIndex:532; }","duration":"299.753277ms","start":"2023-09-14T22:33:55.725164Z","end":"2023-09-14T22:33:56.024917Z","steps":["trace[245118925] 'read index received'  (duration: 299.566021ms)","trace[245118925] 'applied index is now lower than readState.Index'  (duration: 186.691µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:33:56.025132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.982276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-354420\" ","response":"range_response_count:1 size:5468"}
	{"level":"info","ts":"2023-09-14T22:33:56.025179Z","caller":"traceutil/trace.go:171","msg":"trace[812360223] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-354420; range_end:; response_count:1; response_revision:489; }","duration":"300.092267ms","start":"2023-09-14T22:33:55.725076Z","end":"2023-09-14T22:33:56.025168Z","steps":["trace[812360223] 'agreement among raft nodes before linearized reading'  (duration: 299.918663ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:33:56.025203Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:33:55.725065Z","time spent":"300.132021ms","remote":"127.0.0.1:38550","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5491,"request content":"key:\"/registry/pods/kube-system/etcd-pause-354420\" "}
	{"level":"info","ts":"2023-09-14T22:33:56.025401Z","caller":"traceutil/trace.go:171","msg":"trace[1213416699] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"301.520337ms","start":"2023-09-14T22:33:55.723863Z","end":"2023-09-14T22:33:56.025383Z","steps":["trace[1213416699] 'process raft request'  (duration: 300.91771ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:33:56.026014Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:33:55.723846Z","time spent":"301.742469ms","remote":"127.0.0.1:38550","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-354420\" mod_revision:437 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-354420\" value_size:6351 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-354420\" > >"}
	
	* 
	* ==> etcd [a8a309a10f0b1f39f67e5461ab7891c4a75ffaada4d2b911ac84b44642ab5d30] <==
	* {"level":"info","ts":"2023-09-14T22:33:26.297561Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T22:33:28.170581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T22:33:28.170722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:33:28.170774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgPreVoteResp from d386e7203fab19ce at term 2"}
	{"level":"info","ts":"2023-09-14T22:33:28.170813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:28.170845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgVoteResp from d386e7203fab19ce at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:28.170904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became leader at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:28.170937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d386e7203fab19ce elected leader d386e7203fab19ce at term 3"}
	{"level":"info","ts":"2023-09-14T22:33:28.17787Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d386e7203fab19ce","local-member-attributes":"{Name:pause-354420 ClientURLs:[https://192.168.39.45:2379]}","request-path":"/0/members/d386e7203fab19ce/attributes","cluster-id":"34c61d36ecc5c83e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:33:28.177956Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:33:28.178271Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:33:28.178324Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:33:28.17835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:33:28.179476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:33:28.179934Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.45:2379"}
	{"level":"info","ts":"2023-09-14T22:33:41.226227Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-14T22:33:41.226321Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-354420","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.45:2380"],"advertise-client-urls":["https://192.168.39.45:2379"]}
	{"level":"warn","ts":"2023-09-14T22:33:41.226454Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:33:41.226523Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:33:41.228857Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.45:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T22:33:41.229563Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.45:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-14T22:33:41.230721Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d386e7203fab19ce","current-leader-member-id":"d386e7203fab19ce"}
	{"level":"info","ts":"2023-09-14T22:33:41.235719Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2023-09-14T22:33:41.236024Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2023-09-14T22:33:41.236186Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-354420","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.45:2380"],"advertise-client-urls":["https://192.168.39.45:2379"]}
	
	* 
	* ==> kernel <==
	*  22:34:07 up 2 min,  0 users,  load average: 1.39, 0.64, 0.24
	Linux pause-354420 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c3bb34a9679cf4e90ae9fc02ba48493b8996e0cf856875635e617b0a79ab6071] <==
	* 
	* 
	* ==> kube-apiserver [fcf27877f3f6e417f6e7d57e1ee7e6b50633b09d38ff903acc0463d411810530] <==
	* I0914 22:33:48.251850       1 establishing_controller.go:76] Starting EstablishingController
	I0914 22:33:48.251872       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0914 22:33:48.251885       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0914 22:33:48.251899       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0914 22:33:48.410845       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 22:33:48.433675       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 22:33:48.434037       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 22:33:48.442958       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 22:33:48.443000       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 22:33:48.447699       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 22:33:48.447817       1 aggregator.go:166] initial CRD sync complete...
	I0914 22:33:48.447854       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 22:33:48.447862       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 22:33:48.447868       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:33:48.452601       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 22:33:48.460651       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 22:33:48.512480       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 22:33:49.222889       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 22:33:50.037776       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 22:33:50.072288       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 22:33:50.122472       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 22:33:50.151577       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:33:50.164414       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 22:34:00.658341       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:34:00.731078       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [248173e88a97fb588e3ba93fd0d7a23f6a4658825e0669363abab3bf3c91d406] <==
	* I0914 22:34:00.637584       1 taint_manager.go:211] "Sending events to api server"
	I0914 22:34:00.637634       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0914 22:34:00.638406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="395.987µs"
	I0914 22:34:00.638458       1 shared_informer.go:318] Caches are synced for cronjob
	I0914 22:34:00.638489       1 shared_informer.go:318] Caches are synced for deployment
	I0914 22:34:00.638783       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-354420"
	I0914 22:34:00.638879       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0914 22:34:00.639713       1 event.go:307] "Event occurred" object="pause-354420" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-354420 event: Registered Node pause-354420 in Controller"
	I0914 22:34:00.640469       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0914 22:34:00.640652       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0914 22:34:00.643608       1 shared_informer.go:318] Caches are synced for PV protection
	I0914 22:34:00.643796       1 shared_informer.go:318] Caches are synced for TTL
	I0914 22:34:00.652634       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 22:34:00.654183       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 22:34:00.654845       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 22:34:00.655665       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 22:34:00.698201       1 shared_informer.go:318] Caches are synced for endpoint
	I0914 22:34:00.708339       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0914 22:34:00.752800       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:34:00.752899       1 shared_informer.go:318] Caches are synced for HPA
	I0914 22:34:00.781324       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 22:34:00.801789       1 shared_informer.go:318] Caches are synced for crt configmap
	I0914 22:34:01.217724       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 22:34:01.217863       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0914 22:34:01.241302       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [f070228f60de6687dd612d0329a50523321466f382f547681681d3e836745994] <==
	* 
	* 
	* ==> kube-proxy [51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91] <==
	* I0914 22:33:26.865426       1 server_others.go:69] "Using iptables proxy"
	E0914 22:33:26.868512       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-354420": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:28.002026       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-354420": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:30.276174       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-354420": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:34.685243       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-354420": dial tcp 192.168.39.45:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [549c4bcbd5a5ac0898999406f0c1acd5e6b7ee956a4eab4dedf8b20f00f15d9a] <==
	* I0914 22:33:49.433724       1 server_others.go:69] "Using iptables proxy"
	I0914 22:33:49.452412       1 node.go:141] Successfully retrieved node IP: 192.168.39.45
	I0914 22:33:49.524249       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:33:49.524309       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:33:49.533679       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:33:49.533783       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:33:49.534025       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:33:49.534041       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:33:49.536177       1 config.go:188] "Starting service config controller"
	I0914 22:33:49.536243       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:33:49.536278       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:33:49.536284       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:33:49.536856       1 config.go:315] "Starting node config controller"
	I0914 22:33:49.536865       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:33:49.637385       1 shared_informer.go:318] Caches are synced for node config
	I0914 22:33:49.637430       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:33:49.637461       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [05536c50916c5095dcfe15b27a47846e646a9e5e90ff425df5f8e2c177de9287] <==
	* I0914 22:33:46.072359       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:33:48.304007       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:33:48.304151       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:33:48.304164       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:33:48.304281       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:33:48.378780       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:33:48.378832       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:33:48.387289       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:33:48.387673       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:33:48.387754       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:33:48.387773       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:33:48.488971       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [8c3f0e0d0a40ecf00f3f628e5581865377a3e7b61d5b9b5f253d7ad539f139ed] <==
	* E0914 22:33:35.141353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.45:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:35.735936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.45:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:35.736182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.45:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:35.966652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.45:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:35.966708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.45:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.242782       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.45:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.242904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.45:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.518650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.45:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.518707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.45:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.525304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.45:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.525367       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.45:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.558848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.45:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.558942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.45:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.720727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.45:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.720786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.45:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:36.904783       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.45:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:36.904875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.45:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:37.464302       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.45:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:37.464404       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.45:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	W0914 22:33:37.576017       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.45:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:37.576078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.45:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	E0914 22:33:41.043589       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0914 22:33:41.044778       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0914 22:33:41.044885       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0914 22:33:41.045005       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:31:34 UTC, ends at Thu 2023-09-14 22:34:07 UTC. --
	Sep 14 22:33:43 pause-354420 kubelet[3293]: E0914 22:33:43.561185    3293 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.45:8443: connect: connection refused" node="pause-354420"
	Sep 14 22:33:43 pause-354420 kubelet[3293]: W0914 22:33:43.687411    3293 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: E0914 22:33:43.687499    3293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: W0914 22:33:43.935303    3293 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: E0914 22:33:43.935414    3293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: W0914 22:33:43.948296    3293 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-354420&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:43 pause-354420 kubelet[3293]: E0914 22:33:43.948375    3293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-354420&limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:44 pause-354420 kubelet[3293]: E0914 22:33:44.247844    3293 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-354420?timeout=10s\": dial tcp 192.168.39.45:8443: connect: connection refused" interval="1.6s"
	Sep 14 22:33:44 pause-354420 kubelet[3293]: W0914 22:33:44.341711    3293 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:44 pause-354420 kubelet[3293]: E0914 22:33:44.341835    3293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.45:8443: connect: connection refused
	Sep 14 22:33:44 pause-354420 kubelet[3293]: I0914 22:33:44.363463    3293 kubelet_node_status.go:70] "Attempting to register node" node="pause-354420"
	Sep 14 22:33:44 pause-354420 kubelet[3293]: E0914 22:33:44.364164    3293 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.45:8443: connect: connection refused" node="pause-354420"
	Sep 14 22:33:45 pause-354420 kubelet[3293]: I0914 22:33:45.966272    3293 kubelet_node_status.go:70] "Attempting to register node" node="pause-354420"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.486049    3293 kubelet_node_status.go:108] "Node was previously registered" node="pause-354420"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.486229    3293 kubelet_node_status.go:73] "Successfully registered node" node="pause-354420"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.488760    3293 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.489966    3293 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.798898    3293 apiserver.go:52] "Watching apiserver"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.802775    3293 topology_manager.go:215] "Topology Admit Handler" podUID="cba0aa8a-8a13-414c-8d84-7de5a8f6b945" podNamespace="kube-system" podName="kube-proxy-fzt4z"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.802965    3293 topology_manager.go:215] "Topology Admit Handler" podUID="b0fa85bd-c439-4a5a-9e2a-552faa59e3c0" podNamespace="kube-system" podName="coredns-5dd5756b68-6q49n"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.840935    3293 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.927338    3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cba0aa8a-8a13-414c-8d84-7de5a8f6b945-xtables-lock\") pod \"kube-proxy-fzt4z\" (UID: \"cba0aa8a-8a13-414c-8d84-7de5a8f6b945\") " pod="kube-system/kube-proxy-fzt4z"
	Sep 14 22:33:48 pause-354420 kubelet[3293]: I0914 22:33:48.927399    3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cba0aa8a-8a13-414c-8d84-7de5a8f6b945-lib-modules\") pod \"kube-proxy-fzt4z\" (UID: \"cba0aa8a-8a13-414c-8d84-7de5a8f6b945\") " pod="kube-system/kube-proxy-fzt4z"
	Sep 14 22:33:49 pause-354420 kubelet[3293]: I0914 22:33:49.104326    3293 scope.go:117] "RemoveContainer" containerID="c3dea3ffbbb69c2efc62e40eaa7289e4257f63a4a6c8057502d02f39ef994e4d"
	Sep 14 22:33:49 pause-354420 kubelet[3293]: I0914 22:33:49.104886    3293 scope.go:117] "RemoveContainer" containerID="51c99507fd79602d87b641e6a45bcb5b50d014c9adb29142908e3a900bc77e91"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-354420 -n pause-354420
helpers_test.go:261: (dbg) Run:  kubectl --context pause-354420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (54.92s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (257.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1276973236.exe start -p stopped-upgrade-948459 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1276973236.exe start -p stopped-upgrade-948459 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.698638837s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1276973236.exe -p stopped-upgrade-948459 stop
E0914 22:38:32.188680   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1276973236.exe -p stopped-upgrade-948459 stop: (1m33.09698386s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-948459 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-948459 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (38.699414965s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-948459] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-948459 in cluster stopped-upgrade-948459
	* Restarting existing kvm2 VM for "stopped-upgrade-948459" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:39:51.806820   44499 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:39:51.806985   44499 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:39:51.806998   44499 out.go:309] Setting ErrFile to fd 2...
	I0914 22:39:51.807006   44499 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:39:51.807302   44499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:39:51.808067   44499 out.go:303] Setting JSON to false
	I0914 22:39:51.809483   44499 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4934,"bootTime":1694726258,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:39:51.809561   44499 start.go:138] virtualization: kvm guest
	I0914 22:39:51.812183   44499 out.go:177] * [stopped-upgrade-948459] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:39:51.813871   44499 notify.go:220] Checking for updates...
	I0914 22:39:51.813877   44499 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:39:51.815521   44499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:39:51.817326   44499 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:39:51.818852   44499 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:39:51.820173   44499 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:39:51.821520   44499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:39:51.823062   44499 config.go:182] Loaded profile config "stopped-upgrade-948459": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0914 22:39:51.823075   44499 start_flags.go:686] config upgrade: Driver=kvm2
	I0914 22:39:51.823082   44499 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503
	I0914 22:39:51.823145   44499 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/stopped-upgrade-948459/config.json ...
	I0914 22:39:51.823763   44499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:39:51.823809   44499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:39:51.840492   44499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36613
	I0914 22:39:51.840912   44499 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:39:51.841523   44499 main.go:141] libmachine: Using API Version  1
	I0914 22:39:51.841558   44499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:39:51.841893   44499 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:39:51.842081   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:39:51.844421   44499 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 22:39:51.845889   44499 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:39:51.846174   44499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:39:51.846208   44499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:39:51.860705   44499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I0914 22:39:51.861248   44499 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:39:51.861820   44499 main.go:141] libmachine: Using API Version  1
	I0914 22:39:51.861855   44499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:39:51.862258   44499 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:39:51.862460   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:39:51.900685   44499 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:39:51.902208   44499 start.go:298] selected driver: kvm2
	I0914 22:39:51.902224   44499 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-948459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.210 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 22:39:51.902333   44499 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:39:51.903039   44499 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:51.903117   44499 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:39:51.917556   44499 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:39:51.917995   44499 cni.go:84] Creating CNI manager for ""
	I0914 22:39:51.918014   44499 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0914 22:39:51.918021   44499 start_flags.go:321] config:
	{Name:stopped-upgrade-948459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.210 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 22:39:51.918169   44499 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:51.919910   44499 out.go:177] * Starting control plane node stopped-upgrade-948459 in cluster stopped-upgrade-948459
	I0914 22:39:51.921162   44499 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0914 22:39:52.023297   44499 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0914 22:39:52.023412   44499 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/stopped-upgrade-948459/config.json ...
	I0914 22:39:52.023493   44499 cache.go:107] acquiring lock: {Name:mkff58d72010a5253f2aeec8a75178e46da26ceb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:52.023527   44499 cache.go:107] acquiring lock: {Name:mk0047fd90520620e6fb8bf8a3cb9d27794b0683 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:52.023541   44499 cache.go:107] acquiring lock: {Name:mka6f0542a3a53240d4e6146669b2ad365734286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:52.023492   44499 cache.go:107] acquiring lock: {Name:mk7f016dc56396fc5cc2f1923f09058d0d2f3809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:52.023561   44499 cache.go:107] acquiring lock: {Name:mka08b933c610fdad9569d9776f61326fe3da113 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:52.023572   44499 cache.go:107] acquiring lock: {Name:mk493db77490a7ce3badfede780ef64553499771 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:52.023608   44499 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0914 22:39:52.023646   44499 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0914 22:39:52.023671   44499 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0914 22:39:52.023586   44499 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 22:39:52.023672   44499 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 99.131µs
	I0914 22:39:52.023554   44499 cache.go:107] acquiring lock: {Name:mk462c0360be954394d7742924c19fe7c63b7d00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:52.023681   44499 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 157.732µs
	I0914 22:39:52.023690   44499 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0914 22:39:52.023691   44499 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0914 22:39:52.023692   44499 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0914 22:39:52.023578   44499 cache.go:107] acquiring lock: {Name:mka1aa152ae6383e50a98552651ec8f0af2d5a8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:39:52.023715   44499 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0914 22:39:52.023723   44499 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 186.686µs
	I0914 22:39:52.023738   44499 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0914 22:39:52.023739   44499 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0914 22:39:52.023747   44499 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 171.227µs
	I0914 22:39:52.023760   44499 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0914 22:39:52.023651   44499 cache.go:115] /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0914 22:39:52.023769   44499 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 225.237µs
	I0914 22:39:52.023777   44499 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0914 22:39:52.023722   44499 start.go:365] acquiring machines lock for stopped-upgrade-948459: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:39:52.023796   44499 start.go:369] acquired machines lock for "stopped-upgrade-948459" in 10.058µs
	I0914 22:39:52.023809   44499 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:39:52.023814   44499 fix.go:54] fixHost starting: minikube
	I0914 22:39:52.023644   44499 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 100.118µs
	I0914 22:39:52.023842   44499 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0914 22:39:52.023690   44499 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 205.661µs
	I0914 22:39:52.023856   44499 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 22:39:52.023706   44499 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 216.873µs
	I0914 22:39:52.023864   44499 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0914 22:39:52.023871   44499 cache.go:87] Successfully saved all images to host disk.
	I0914 22:39:52.024116   44499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:39:52.024146   44499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:39:52.039353   44499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43167
	I0914 22:39:52.039816   44499 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:39:52.040235   44499 main.go:141] libmachine: Using API Version  1
	I0914 22:39:52.040252   44499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:39:52.040602   44499 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:39:52.040798   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:39:52.041021   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetState
	I0914 22:39:52.042609   44499 fix.go:102] recreateIfNeeded on stopped-upgrade-948459: state=Stopped err=<nil>
	I0914 22:39:52.042634   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	W0914 22:39:52.042795   44499 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:39:52.044604   44499 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-948459" ...
	I0914 22:39:52.045887   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .Start
	I0914 22:39:52.046059   44499 main.go:141] libmachine: (stopped-upgrade-948459) Ensuring networks are active...
	I0914 22:39:52.046791   44499 main.go:141] libmachine: (stopped-upgrade-948459) Ensuring network default is active
	I0914 22:39:52.047160   44499 main.go:141] libmachine: (stopped-upgrade-948459) Ensuring network minikube-net is active
	I0914 22:39:52.047541   44499 main.go:141] libmachine: (stopped-upgrade-948459) Getting domain xml...
	I0914 22:39:52.048353   44499 main.go:141] libmachine: (stopped-upgrade-948459) Creating domain...
	I0914 22:39:53.317447   44499 main.go:141] libmachine: (stopped-upgrade-948459) Waiting to get IP...
	I0914 22:39:53.318318   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:53.318888   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:53.318967   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:53.318860   44534 retry.go:31] will retry after 277.990144ms: waiting for machine to come up
	I0914 22:39:53.598487   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:53.599001   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:53.599024   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:53.598979   44534 retry.go:31] will retry after 353.0095ms: waiting for machine to come up
	I0914 22:39:53.953639   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:53.954090   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:53.954121   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:53.954056   44534 retry.go:31] will retry after 358.546503ms: waiting for machine to come up
	I0914 22:39:54.314583   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:54.315165   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:54.315197   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:54.315128   44534 retry.go:31] will retry after 582.358504ms: waiting for machine to come up
	I0914 22:39:54.898996   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:54.899549   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:54.899575   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:54.899493   44534 retry.go:31] will retry after 467.780023ms: waiting for machine to come up
	I0914 22:39:55.369063   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:55.369543   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:55.369582   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:55.369498   44534 retry.go:31] will retry after 946.229067ms: waiting for machine to come up
	I0914 22:39:56.317835   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:56.318359   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:56.318403   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:56.318314   44534 retry.go:31] will retry after 1.016115066s: waiting for machine to come up
	I0914 22:39:57.336165   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:57.336684   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:57.336717   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:57.336620   44534 retry.go:31] will retry after 1.017289932s: waiting for machine to come up
	I0914 22:39:58.355786   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:58.356419   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:58.356454   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:58.356366   44534 retry.go:31] will retry after 1.137393359s: waiting for machine to come up
	I0914 22:39:59.495749   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:39:59.496272   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:39:59.496303   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:39:59.496225   44534 retry.go:31] will retry after 2.318876425s: waiting for machine to come up
	I0914 22:40:01.816601   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:01.817071   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:40:01.817101   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:40:01.817028   44534 retry.go:31] will retry after 2.535580255s: waiting for machine to come up
	I0914 22:40:04.353820   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:04.354324   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:40:04.354354   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:40:04.354250   44534 retry.go:31] will retry after 2.692678243s: waiting for machine to come up
	I0914 22:40:07.048166   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:07.048588   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:40:07.048625   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:40:07.048534   44534 retry.go:31] will retry after 2.747143941s: waiting for machine to come up
	I0914 22:40:09.797399   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:09.797861   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:40:09.797890   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:40:09.797818   44534 retry.go:31] will retry after 4.215549901s: waiting for machine to come up
	I0914 22:40:14.015137   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:14.015860   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:40:14.015886   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:40:14.015809   44534 retry.go:31] will retry after 5.148033147s: waiting for machine to come up
	I0914 22:40:19.165088   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:19.165675   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | unable to find current IP address of domain stopped-upgrade-948459 in network minikube-net
	I0914 22:40:19.165702   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | I0914 22:40:19.165624   44534 retry.go:31] will retry after 8.637434232s: waiting for machine to come up
	I0914 22:40:27.805078   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:27.805594   44499 main.go:141] libmachine: (stopped-upgrade-948459) Found IP for machine: 192.168.83.210
	I0914 22:40:27.805625   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has current primary IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:27.805637   44499 main.go:141] libmachine: (stopped-upgrade-948459) Reserving static IP address...
	I0914 22:40:27.806029   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "stopped-upgrade-948459", mac: "52:54:00:e1:8c:a3", ip: "192.168.83.210"} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:27.806052   44499 main.go:141] libmachine: (stopped-upgrade-948459) Reserved static IP address: 192.168.83.210
	I0914 22:40:27.806070   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-948459", mac: "52:54:00:e1:8c:a3", ip: "192.168.83.210"}
	I0914 22:40:27.806087   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | Getting to WaitForSSH function...
	I0914 22:40:27.806100   44499 main.go:141] libmachine: (stopped-upgrade-948459) Waiting for SSH to be available...
	I0914 22:40:27.808356   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:27.808670   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:27.808720   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:27.808826   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | Using SSH client type: external
	I0914 22:40:27.808885   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/stopped-upgrade-948459/id_rsa (-rw-------)
	I0914 22:40:27.808931   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/stopped-upgrade-948459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:40:27.808953   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | About to run SSH command:
	I0914 22:40:27.808964   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | exit 0
	I0914 22:40:27.938609   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | SSH cmd err, output: <nil>: 
	I0914 22:40:27.938955   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetConfigRaw
	I0914 22:40:27.939618   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetIP
	I0914 22:40:27.942826   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:27.943171   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:27.943202   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:27.943456   44499 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/stopped-upgrade-948459/config.json ...
	I0914 22:40:27.943737   44499 machine.go:88] provisioning docker machine ...
	I0914 22:40:27.943767   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:40:27.944005   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetMachineName
	I0914 22:40:27.944159   44499 buildroot.go:166] provisioning hostname "stopped-upgrade-948459"
	I0914 22:40:27.944193   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetMachineName
	I0914 22:40:27.944385   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:27.946728   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:27.947094   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:27.947122   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:27.947274   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHPort
	I0914 22:40:27.947448   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:27.947605   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:27.947770   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHUsername
	I0914 22:40:27.947974   44499 main.go:141] libmachine: Using SSH client type: native
	I0914 22:40:27.948294   44499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.210 22 <nil> <nil>}
	I0914 22:40:27.948308   44499 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-948459 && echo "stopped-upgrade-948459" | sudo tee /etc/hostname
	I0914 22:40:28.065174   44499 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-948459
	
	I0914 22:40:28.065203   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:28.067889   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.068243   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:28.068275   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.068413   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHPort
	I0914 22:40:28.068595   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:28.068747   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:28.068870   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHUsername
	I0914 22:40:28.069068   44499 main.go:141] libmachine: Using SSH client type: native
	I0914 22:40:28.069379   44499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.210 22 <nil> <nil>}
	I0914 22:40:28.069397   44499 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-948459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-948459/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-948459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:40:28.187191   44499 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:40:28.187218   44499 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:40:28.187234   44499 buildroot.go:174] setting up certificates
	I0914 22:40:28.187264   44499 provision.go:83] configureAuth start
	I0914 22:40:28.187279   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetMachineName
	I0914 22:40:28.187590   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetIP
	I0914 22:40:28.190351   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.190686   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:28.190725   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.190906   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:28.193250   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.193518   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:28.193539   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.193643   44499 provision.go:138] copyHostCerts
	I0914 22:40:28.193683   44499 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:40:28.193692   44499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:40:28.193776   44499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:40:28.193883   44499 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:40:28.193892   44499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:40:28.193920   44499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:40:28.194009   44499 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:40:28.194018   44499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:40:28.194042   44499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:40:28.194095   44499 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-948459 san=[192.168.83.210 192.168.83.210 localhost 127.0.0.1 minikube stopped-upgrade-948459]
	I0914 22:40:28.489922   44499 provision.go:172] copyRemoteCerts
	I0914 22:40:28.489973   44499 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:40:28.489994   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:28.492902   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.493281   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:28.493310   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.493470   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHPort
	I0914 22:40:28.493674   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:28.493812   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHUsername
	I0914 22:40:28.493949   44499 sshutil.go:53] new ssh client: &{IP:192.168.83.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/stopped-upgrade-948459/id_rsa Username:docker}
	I0914 22:40:28.577276   44499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:40:28.589934   44499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 22:40:28.602686   44499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:40:28.614266   44499 provision.go:86] duration metric: configureAuth took 426.992071ms
	I0914 22:40:28.614283   44499 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:40:28.614427   44499 config.go:182] Loaded profile config "stopped-upgrade-948459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0914 22:40:28.614492   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:28.617464   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.617821   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:28.617857   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:28.618002   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHPort
	I0914 22:40:28.618204   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:28.618343   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:28.618453   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHUsername
	I0914 22:40:28.618578   44499 main.go:141] libmachine: Using SSH client type: native
	I0914 22:40:28.618894   44499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.210 22 <nil> <nil>}
	I0914 22:40:28.618917   44499 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:40:29.698978   44499 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:40:29.699007   44499 machine.go:91] provisioned docker machine in 1.755250123s
	I0914 22:40:29.699019   44499 start.go:300] post-start starting for "stopped-upgrade-948459" (driver="kvm2")
	I0914 22:40:29.699032   44499 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:40:29.699077   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:40:29.699397   44499 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:40:29.699424   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:29.702259   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.702684   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:29.702739   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.702880   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHPort
	I0914 22:40:29.703082   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:29.703263   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHUsername
	I0914 22:40:29.703405   44499 sshutil.go:53] new ssh client: &{IP:192.168.83.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/stopped-upgrade-948459/id_rsa Username:docker}
	I0914 22:40:29.785757   44499 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:40:29.789364   44499 info.go:137] Remote host: Buildroot 2019.02.7
	I0914 22:40:29.789386   44499 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:40:29.789465   44499 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:40:29.789561   44499 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:40:29.789683   44499 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:40:29.794841   44499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:40:29.806638   44499 start.go:303] post-start completed in 107.604794ms
	I0914 22:40:29.806658   44499 fix.go:56] fixHost completed within 37.782842343s
	I0914 22:40:29.806702   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:29.809207   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.809579   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:29.809606   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.809778   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHPort
	I0914 22:40:29.810025   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:29.810228   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:29.810354   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHUsername
	I0914 22:40:29.810499   44499 main.go:141] libmachine: Using SSH client type: native
	I0914 22:40:29.810788   44499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.210 22 <nil> <nil>}
	I0914 22:40:29.810799   44499 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 22:40:29.923960   44499 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731229.874748323
	
	I0914 22:40:29.923984   44499 fix.go:206] guest clock: 1694731229.874748323
	I0914 22:40:29.923993   44499 fix.go:219] Guest: 2023-09-14 22:40:29.874748323 +0000 UTC Remote: 2023-09-14 22:40:29.806662266 +0000 UTC m=+38.032741610 (delta=68.086057ms)
	I0914 22:40:29.924017   44499 fix.go:190] guest clock delta is within tolerance: 68.086057ms
	I0914 22:40:29.924024   44499 start.go:83] releasing machines lock for "stopped-upgrade-948459", held for 37.900218454s
	I0914 22:40:29.924057   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:40:29.924341   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetIP
	I0914 22:40:29.926961   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.927279   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:29.927311   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.927494   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:40:29.927981   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:40:29.928138   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .DriverName
	I0914 22:40:29.928237   44499 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:40:29.928275   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:29.928531   44499 ssh_runner.go:195] Run: cat /version.json
	I0914 22:40:29.928558   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHHostname
	I0914 22:40:29.931074   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.931233   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.931450   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:29.931767   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.931808   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHPort
	I0914 22:40:29.931865   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8c:a3", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-09-14 23:40:16 +0000 UTC Type:0 Mac:52:54:00:e1:8c:a3 Iaid: IPaddr:192.168.83.210 Prefix:24 Hostname:stopped-upgrade-948459 Clientid:01:52:54:00:e1:8c:a3}
	I0914 22:40:29.932073   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:29.932130   44499 main.go:141] libmachine: (stopped-upgrade-948459) DBG | domain stopped-upgrade-948459 has defined IP address 192.168.83.210 and MAC address 52:54:00:e1:8c:a3 in network minikube-net
	I0914 22:40:29.932284   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHUsername
	I0914 22:40:29.932414   44499 sshutil.go:53] new ssh client: &{IP:192.168.83.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/stopped-upgrade-948459/id_rsa Username:docker}
	I0914 22:40:29.933403   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHPort
	I0914 22:40:29.933558   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHKeyPath
	I0914 22:40:29.933702   44499 main.go:141] libmachine: (stopped-upgrade-948459) Calling .GetSSHUsername
	I0914 22:40:29.933848   44499 sshutil.go:53] new ssh client: &{IP:192.168.83.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/stopped-upgrade-948459/id_rsa Username:docker}
	W0914 22:40:30.048278   44499 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 22:40:30.048370   44499 ssh_runner.go:195] Run: systemctl --version
	I0914 22:40:30.052975   44499 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:40:30.097190   44499 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:40:30.103480   44499 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:40:30.103548   44499 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:40:30.108372   44499 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 22:40:30.108388   44499 start.go:469] detecting cgroup driver to use...
	I0914 22:40:30.108434   44499 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:40:30.117609   44499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:40:30.125170   44499 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:40:30.125209   44499 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:40:30.132147   44499 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:40:30.139104   44499 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0914 22:40:30.145864   44499 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0914 22:40:30.145909   44499 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:40:30.237970   44499 docker.go:212] disabling docker service ...
	I0914 22:40:30.238027   44499 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:40:30.249471   44499 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:40:30.256241   44499 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:40:30.347800   44499 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:40:30.435651   44499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:40:30.444237   44499 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:40:30.454603   44499 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0914 22:40:30.454665   44499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:40:30.462384   44499 out.go:177] 
	W0914 22:40:30.463734   44499 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0914 22:40:30.463758   44499 out.go:239] * 
	* 
	W0914 22:40:30.464561   44499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 22:40:30.465510   44499 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-948459 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (257.50s)
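Note on the failure above: the RUNTIME_ENABLE exit happens because the pause_image substitution assumes /etc/crio/crio.conf.d/02-crio.conf already exists, and the v1.6.2-era guest image used by this upgrade test does not ship that drop-in. A minimal manual workaround sketch, assuming CRI-O's standard TOML drop-in layout (the file contents below are illustrative, not copied from the ISO, and this is not part of the recorded test run):

	# Illustrative only; assumes CRI-O reads TOML drop-ins from /etc/crio/crio.conf.d.
	sudo mkdir -p /etc/crio/crio.conf.d
	printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.1"\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	# The sed from the log above then has a pause_image line to match:
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf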

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-344363 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-344363 --alsologtostderr -v=3: exit status 82 (2m1.400010128s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-344363"  ...
	* Stopping node "no-preload-344363"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:38:52.810098   44022 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:38:52.810416   44022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:38:52.810432   44022 out.go:309] Setting ErrFile to fd 2...
	I0914 22:38:52.810440   44022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:38:52.810802   44022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:38:52.811136   44022 out.go:303] Setting JSON to false
	I0914 22:38:52.811256   44022 mustload.go:65] Loading cluster: no-preload-344363
	I0914 22:38:52.811756   44022 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:38:52.811853   44022 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/config.json ...
	I0914 22:38:52.812119   44022 mustload.go:65] Loading cluster: no-preload-344363
	I0914 22:38:52.812303   44022 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:38:52.812347   44022 stop.go:39] StopHost: no-preload-344363
	I0914 22:38:52.812922   44022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:38:52.812984   44022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:38:52.828001   44022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0914 22:38:52.828755   44022 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:38:52.829400   44022 main.go:141] libmachine: Using API Version  1
	I0914 22:38:52.829425   44022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:38:52.829930   44022 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:38:52.832465   44022 out.go:177] * Stopping node "no-preload-344363"  ...
	I0914 22:38:52.834639   44022 main.go:141] libmachine: Stopping "no-preload-344363"...
	I0914 22:38:52.834721   44022 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:38:52.837672   44022 main.go:141] libmachine: (no-preload-344363) Calling .Stop
	I0914 22:38:52.842093   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 0/60
	I0914 22:38:53.844537   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 1/60
	I0914 22:38:54.845902   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 2/60
	I0914 22:38:55.847410   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 3/60
	I0914 22:38:56.848816   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 4/60
	I0914 22:38:57.850968   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 5/60
	I0914 22:38:58.852575   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 6/60
	I0914 22:38:59.853867   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 7/60
	I0914 22:39:00.855379   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 8/60
	I0914 22:39:02.141292   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 9/60
	I0914 22:39:03.143164   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 10/60
	I0914 22:39:04.145230   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 11/60
	I0914 22:39:05.147082   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 12/60
	I0914 22:39:06.148348   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 13/60
	I0914 22:39:07.150183   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 14/60
	I0914 22:39:08.152452   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 15/60
	I0914 22:39:09.153976   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 16/60
	I0914 22:39:10.155621   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 17/60
	I0914 22:39:11.157261   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 18/60
	I0914 22:39:12.158883   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 19/60
	I0914 22:39:13.161053   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 20/60
	I0914 22:39:14.162462   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 21/60
	I0914 22:39:15.164042   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 22/60
	I0914 22:39:16.165991   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 23/60
	I0914 22:39:17.167279   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 24/60
	I0914 22:39:18.169236   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 25/60
	I0914 22:39:19.171365   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 26/60
	I0914 22:39:20.172744   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 27/60
	I0914 22:39:21.174208   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 28/60
	I0914 22:39:22.175749   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 29/60
	I0914 22:39:23.178114   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 30/60
	I0914 22:39:24.179399   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 31/60
	I0914 22:39:25.180422   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 32/60
	I0914 22:39:26.181895   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 33/60
	I0914 22:39:27.184067   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 34/60
	I0914 22:39:28.186227   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 35/60
	I0914 22:39:29.187909   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 36/60
	I0914 22:39:30.189766   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 37/60
	I0914 22:39:31.191430   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 38/60
	I0914 22:39:32.192837   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 39/60
	I0914 22:39:33.194763   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 40/60
	I0914 22:39:34.196080   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 41/60
	I0914 22:39:35.197287   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 42/60
	I0914 22:39:36.198616   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 43/60
	I0914 22:39:37.200183   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 44/60
	I0914 22:39:38.202458   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 45/60
	I0914 22:39:39.203997   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 46/60
	I0914 22:39:40.205535   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 47/60
	I0914 22:39:41.206931   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 48/60
	I0914 22:39:42.208173   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 49/60
	I0914 22:39:43.210250   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 50/60
	I0914 22:39:44.211579   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 51/60
	I0914 22:39:45.212990   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 52/60
	I0914 22:39:46.214361   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 53/60
	I0914 22:39:47.215645   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 54/60
	I0914 22:39:48.217699   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 55/60
	I0914 22:39:49.219197   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 56/60
	I0914 22:39:50.220625   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 57/60
	I0914 22:39:51.222329   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 58/60
	I0914 22:39:52.223622   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 59/60
	I0914 22:39:53.224920   44022 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0914 22:39:53.224966   44022 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:39:53.224989   44022 retry.go:31] will retry after 797.09085ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:39:54.022924   44022 stop.go:39] StopHost: no-preload-344363
	I0914 22:39:54.023330   44022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:39:54.023397   44022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:39:54.039490   44022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41399
	I0914 22:39:54.039885   44022 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:39:54.040364   44022 main.go:141] libmachine: Using API Version  1
	I0914 22:39:54.040388   44022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:39:54.040735   44022 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:39:54.042580   44022 out.go:177] * Stopping node "no-preload-344363"  ...
	I0914 22:39:54.043962   44022 main.go:141] libmachine: Stopping "no-preload-344363"...
	I0914 22:39:54.043988   44022 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:39:54.045710   44022 main.go:141] libmachine: (no-preload-344363) Calling .Stop
	I0914 22:39:54.049240   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 0/60
	I0914 22:39:55.050431   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 1/60
	I0914 22:39:56.052062   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 2/60
	I0914 22:39:57.054024   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 3/60
	I0914 22:39:58.056003   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 4/60
	I0914 22:39:59.057458   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 5/60
	I0914 22:40:00.059888   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 6/60
	I0914 22:40:01.062439   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 7/60
	I0914 22:40:02.064465   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 8/60
	I0914 22:40:03.065956   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 9/60
	I0914 22:40:04.067720   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 10/60
	I0914 22:40:05.070106   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 11/60
	I0914 22:40:06.071931   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 12/60
	I0914 22:40:07.074020   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 13/60
	I0914 22:40:08.075555   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 14/60
	I0914 22:40:09.077320   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 15/60
	I0914 22:40:10.078877   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 16/60
	I0914 22:40:11.080455   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 17/60
	I0914 22:40:12.082714   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 18/60
	I0914 22:40:13.084269   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 19/60
	I0914 22:40:14.086601   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 20/60
	I0914 22:40:15.088116   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 21/60
	I0914 22:40:16.089450   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 22/60
	I0914 22:40:17.090882   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 23/60
	I0914 22:40:18.092363   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 24/60
	I0914 22:40:19.094384   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 25/60
	I0914 22:40:20.095985   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 26/60
	I0914 22:40:21.098245   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 27/60
	I0914 22:40:22.100331   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 28/60
	I0914 22:40:23.101968   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 29/60
	I0914 22:40:24.103754   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 30/60
	I0914 22:40:25.106565   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 31/60
	I0914 22:40:26.107951   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 32/60
	I0914 22:40:27.109258   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 33/60
	I0914 22:40:28.110571   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 34/60
	I0914 22:40:29.112748   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 35/60
	I0914 22:40:30.114097   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 36/60
	I0914 22:40:31.115185   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 37/60
	I0914 22:40:32.116537   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 38/60
	I0914 22:40:33.118236   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 39/60
	I0914 22:40:34.120017   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 40/60
	I0914 22:40:35.122008   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 41/60
	I0914 22:40:36.123330   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 42/60
	I0914 22:40:37.124817   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 43/60
	I0914 22:40:38.126148   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 44/60
	I0914 22:40:39.128422   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 45/60
	I0914 22:40:40.129981   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 46/60
	I0914 22:40:41.131417   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 47/60
	I0914 22:40:42.132867   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 48/60
	I0914 22:40:43.134539   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 49/60
	I0914 22:40:44.136520   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 50/60
	I0914 22:40:45.137902   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 51/60
	I0914 22:40:46.139506   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 52/60
	I0914 22:40:47.140932   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 53/60
	I0914 22:40:48.142109   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 54/60
	I0914 22:40:49.143646   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 55/60
	I0914 22:40:50.146005   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 56/60
	I0914 22:40:51.147278   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 57/60
	I0914 22:40:52.148586   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 58/60
	I0914 22:40:53.149940   44022 main.go:141] libmachine: (no-preload-344363) Waiting for machine to stop 59/60
	I0914 22:40:54.150789   44022 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0914 22:40:54.150828   44022 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:40:54.152955   44022 out.go:177] 
	W0914 22:40:54.154462   44022 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 22:40:54.154476   44022 out.go:239] * 
	* 
	W0914 22:40:54.156750   44022 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 22:40:54.157994   44022 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-344363 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363: exit status 3 (18.411960987s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:41:12.571759   45196 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0914 22:41:12.571786   45196 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-344363" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.81s)
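Note on the failure above: with the kvm2 driver the stop path polls the machine state once a second for 60 attempts, retries once, and then exits with GUEST_STOP_TIMEOUT while libvirt still reports the VM as "Running". A rough manual check against the underlying libvirt domain, assuming the default qemu:///system connection and that the domain is named after the profile (neither detail is confirmed by the log excerpt; this is a diagnostic sketch, not part of the test run):

	# Illustrative diagnostics only.
	virsh -c qemu:///system list --all                      # is the domain still listed as running?
	virsh -c qemu:///system shutdown no-preload-344363      # graceful ACPI shutdown request
	virsh -c qemu:///system destroy no-preload-344363       # hard power-off, last resort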

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-799144 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-799144 --alsologtostderr -v=3: exit status 82 (2m2.113530237s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-799144"  ...
	* Stopping node "default-k8s-diff-port-799144"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:40:14.323928   44758 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:40:14.324119   44758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:40:14.324132   44758 out.go:309] Setting ErrFile to fd 2...
	I0914 22:40:14.324139   44758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:40:14.324408   44758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:40:14.324723   44758 out.go:303] Setting JSON to false
	I0914 22:40:14.324829   44758 mustload.go:65] Loading cluster: default-k8s-diff-port-799144
	I0914 22:40:14.325304   44758 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:40:14.325396   44758 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/config.json ...
	I0914 22:40:14.325635   44758 mustload.go:65] Loading cluster: default-k8s-diff-port-799144
	I0914 22:40:14.325800   44758 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:40:14.325842   44758 stop.go:39] StopHost: default-k8s-diff-port-799144
	I0914 22:40:14.326385   44758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:40:14.326443   44758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:40:14.345411   44758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0914 22:40:14.345885   44758 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:40:14.346629   44758 main.go:141] libmachine: Using API Version  1
	I0914 22:40:14.346658   44758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:40:14.347120   44758 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:40:14.349428   44758 out.go:177] * Stopping node "default-k8s-diff-port-799144"  ...
	I0914 22:40:14.351097   44758 main.go:141] libmachine: Stopping "default-k8s-diff-port-799144"...
	I0914 22:40:14.351128   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:40:14.352934   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Stop
	I0914 22:40:14.356673   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 0/60
	I0914 22:40:15.358198   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 1/60
	I0914 22:40:16.360291   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 2/60
	I0914 22:40:17.361753   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 3/60
	I0914 22:40:18.363158   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 4/60
	I0914 22:40:19.365381   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 5/60
	I0914 22:40:20.366950   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 6/60
	I0914 22:40:21.368347   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 7/60
	I0914 22:40:22.370006   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 8/60
	I0914 22:40:23.371554   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 9/60
	I0914 22:40:24.373553   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 10/60
	I0914 22:40:25.375083   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 11/60
	I0914 22:40:26.376551   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 12/60
	I0914 22:40:27.378023   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 13/60
	I0914 22:40:28.380302   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 14/60
	I0914 22:40:29.382189   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 15/60
	I0914 22:40:30.383989   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 16/60
	I0914 22:40:31.869125   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 17/60
	I0914 22:40:32.870499   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 18/60
	I0914 22:40:33.872014   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 19/60
	I0914 22:40:34.873974   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 20/60
	I0914 22:40:35.875401   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 21/60
	I0914 22:40:36.876663   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 22/60
	I0914 22:40:37.877832   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 23/60
	I0914 22:40:38.879193   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 24/60
	I0914 22:40:39.881387   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 25/60
	I0914 22:40:40.883183   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 26/60
	I0914 22:40:41.884547   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 27/60
	I0914 22:40:42.886099   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 28/60
	I0914 22:40:43.887559   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 29/60
	I0914 22:40:44.889530   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 30/60
	I0914 22:40:45.890839   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 31/60
	I0914 22:40:46.892561   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 32/60
	I0914 22:40:47.894014   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 33/60
	I0914 22:40:48.895296   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 34/60
	I0914 22:40:49.897359   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 35/60
	I0914 22:40:50.898662   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 36/60
	I0914 22:40:51.900220   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 37/60
	I0914 22:40:52.901891   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 38/60
	I0914 22:40:53.903696   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 39/60
	I0914 22:40:54.905556   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 40/60
	I0914 22:40:55.907228   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 41/60
	I0914 22:40:56.908638   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 42/60
	I0914 22:40:57.910140   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 43/60
	I0914 22:40:58.911946   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 44/60
	I0914 22:40:59.914000   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 45/60
	I0914 22:41:00.915854   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 46/60
	I0914 22:41:01.918017   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 47/60
	I0914 22:41:02.919261   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 48/60
	I0914 22:41:03.920549   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 49/60
	I0914 22:41:04.922809   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 50/60
	I0914 22:41:05.924227   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 51/60
	I0914 22:41:06.925571   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 52/60
	I0914 22:41:07.927227   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 53/60
	I0914 22:41:08.929016   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 54/60
	I0914 22:41:09.931045   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 55/60
	I0914 22:41:10.932379   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 56/60
	I0914 22:41:11.933829   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 57/60
	I0914 22:41:12.935246   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 58/60
	I0914 22:41:13.936800   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 59/60
	I0914 22:41:14.937551   44758 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0914 22:41:14.937630   44758 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:41:14.937648   44758 retry.go:31] will retry after 1.327384747s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:41:16.266095   44758 stop.go:39] StopHost: default-k8s-diff-port-799144
	I0914 22:41:16.266582   44758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:41:16.266639   44758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:41:16.281694   44758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0914 22:41:16.282145   44758 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:41:16.282574   44758 main.go:141] libmachine: Using API Version  1
	I0914 22:41:16.282602   44758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:41:16.282959   44758 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:41:16.284956   44758 out.go:177] * Stopping node "default-k8s-diff-port-799144"  ...
	I0914 22:41:16.286350   44758 main.go:141] libmachine: Stopping "default-k8s-diff-port-799144"...
	I0914 22:41:16.286370   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:41:16.288211   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Stop
	I0914 22:41:16.291379   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 0/60
	I0914 22:41:17.292747   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 1/60
	I0914 22:41:18.294114   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 2/60
	I0914 22:41:19.296105   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 3/60
	I0914 22:41:20.298272   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 4/60
	I0914 22:41:21.300146   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 5/60
	I0914 22:41:22.301490   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 6/60
	I0914 22:41:23.302878   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 7/60
	I0914 22:41:24.304337   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 8/60
	I0914 22:41:25.306081   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 9/60
	I0914 22:41:26.308090   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 10/60
	I0914 22:41:27.309992   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 11/60
	I0914 22:41:28.311692   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 12/60
	I0914 22:41:29.314346   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 13/60
	I0914 22:41:30.315723   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 14/60
	I0914 22:41:31.318225   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 15/60
	I0914 22:41:32.319713   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 16/60
	I0914 22:41:33.321943   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 17/60
	I0914 22:41:34.323261   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 18/60
	I0914 22:41:35.324831   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 19/60
	I0914 22:41:36.326689   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 20/60
	I0914 22:41:37.328376   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 21/60
	I0914 22:41:38.329902   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 22/60
	I0914 22:41:39.331233   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 23/60
	I0914 22:41:40.332521   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 24/60
	I0914 22:41:41.334309   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 25/60
	I0914 22:41:42.335795   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 26/60
	I0914 22:41:43.338184   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 27/60
	I0914 22:41:44.339622   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 28/60
	I0914 22:41:45.341359   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 29/60
	I0914 22:41:46.343730   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 30/60
	I0914 22:41:47.344928   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 31/60
	I0914 22:41:48.346369   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 32/60
	I0914 22:41:49.347758   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 33/60
	I0914 22:41:50.349196   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 34/60
	I0914 22:41:51.351184   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 35/60
	I0914 22:41:52.352562   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 36/60
	I0914 22:41:53.353699   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 37/60
	I0914 22:41:54.354957   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 38/60
	I0914 22:41:55.356398   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 39/60
	I0914 22:41:56.358237   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 40/60
	I0914 22:41:57.359797   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 41/60
	I0914 22:41:58.361452   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 42/60
	I0914 22:41:59.363149   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 43/60
	I0914 22:42:00.364524   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 44/60
	I0914 22:42:01.366088   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 45/60
	I0914 22:42:02.367325   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 46/60
	I0914 22:42:03.368567   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 47/60
	I0914 22:42:04.369854   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 48/60
	I0914 22:42:05.371147   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 49/60
	I0914 22:42:06.373241   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 50/60
	I0914 22:42:07.374632   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 51/60
	I0914 22:42:08.376001   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 52/60
	I0914 22:42:09.377936   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 53/60
	I0914 22:42:10.379486   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 54/60
	I0914 22:42:11.381108   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 55/60
	I0914 22:42:12.382413   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 56/60
	I0914 22:42:13.383855   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 57/60
	I0914 22:42:14.385078   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 58/60
	I0914 22:42:15.386512   44758 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for machine to stop 59/60
	I0914 22:42:16.387293   44758 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0914 22:42:16.387343   44758 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:42:16.389165   44758 out.go:177] 
	W0914 22:42:16.390466   44758 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 22:42:16.390495   44758 out.go:239] * 
	* 
	W0914 22:42:16.392789   44758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 22:42:16.394240   44758 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-799144 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144: exit status 3 (18.606206217s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:42:35.003759   45729 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.175:22: connect: no route to host
	E0914 22:42:35.003786   45729 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.175:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-799144" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.72s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363: exit status 3 (3.171819116s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:41:15.743800   45297 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0914 22:41:15.743820   45297 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-344363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-344363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153988293s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-344363 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363: exit status 3 (3.058077146s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:41:24.955817   45372 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host
	E0914 22:41:24.955835   45372 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.60:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-344363" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
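
The failure above reduces to the post-stop status check: after stopping the profile, the host status is expected to read "Stopped", but SSH to 192.168.39.60:22 fails with "no route to host" and the status resolves to "Error". A minimal manual reproduction sketch, using only the commands already shown in this report (profile name no-preload-344363 as above):

	# stop the profile, then confirm the reported host state
	out/minikube-linux-amd64 stop -p no-preload-344363 --alsologtostderr -v=3
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363
	# expected: Stopped   (the run above printed "Error" instead)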

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (140.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-588699 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-588699 --alsologtostderr -v=3: exit status 82 (2m1.693415682s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-588699"  ...
	* Stopping node "embed-certs-588699"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:41:46.242171   45605 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:41:46.242282   45605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:41:46.242291   45605 out.go:309] Setting ErrFile to fd 2...
	I0914 22:41:46.242296   45605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:41:46.242510   45605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:41:46.242783   45605 out.go:303] Setting JSON to false
	I0914 22:41:46.242864   45605 mustload.go:65] Loading cluster: embed-certs-588699
	I0914 22:41:46.243197   45605 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:41:46.243291   45605 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/config.json ...
	I0914 22:41:46.243493   45605 mustload.go:65] Loading cluster: embed-certs-588699
	I0914 22:41:46.243634   45605 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:41:46.243678   45605 stop.go:39] StopHost: embed-certs-588699
	I0914 22:41:46.244048   45605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:41:46.244110   45605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:41:46.258064   45605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39585
	I0914 22:41:46.258510   45605 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:41:46.259026   45605 main.go:141] libmachine: Using API Version  1
	I0914 22:41:46.259049   45605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:41:46.259367   45605 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:41:46.261684   45605 out.go:177] * Stopping node "embed-certs-588699"  ...
	I0914 22:41:46.263044   45605 main.go:141] libmachine: Stopping "embed-certs-588699"...
	I0914 22:41:46.263067   45605 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:41:46.264688   45605 main.go:141] libmachine: (embed-certs-588699) Calling .Stop
	I0914 22:41:46.268045   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 0/60
	I0914 22:41:47.270216   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 1/60
	I0914 22:41:48.272395   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 2/60
	I0914 22:41:49.273915   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 3/60
	I0914 22:41:50.275585   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 4/60
	I0914 22:41:51.277886   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 5/60
	I0914 22:41:52.279248   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 6/60
	I0914 22:41:53.280651   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 7/60
	I0914 22:41:54.282106   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 8/60
	I0914 22:41:55.283648   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 9/60
	I0914 22:41:56.286140   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 10/60
	I0914 22:41:57.287595   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 11/60
	I0914 22:41:58.289010   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 12/60
	I0914 22:41:59.290719   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 13/60
	I0914 22:42:00.292375   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 14/60
	I0914 22:42:01.294460   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 15/60
	I0914 22:42:02.296721   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 16/60
	I0914 22:42:03.298281   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 17/60
	I0914 22:42:04.299565   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 18/60
	I0914 22:42:05.300887   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 19/60
	I0914 22:42:06.303101   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 20/60
	I0914 22:42:07.304531   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 21/60
	I0914 22:42:08.305909   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 22/60
	I0914 22:42:09.307328   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 23/60
	I0914 22:42:10.308791   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 24/60
	I0914 22:42:11.310634   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 25/60
	I0914 22:42:12.312746   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 26/60
	I0914 22:42:13.314717   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 27/60
	I0914 22:42:14.316036   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 28/60
	I0914 22:42:15.317262   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 29/60
	I0914 22:42:16.319418   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 30/60
	I0914 22:42:17.320937   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 31/60
	I0914 22:42:18.322477   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 32/60
	I0914 22:42:19.323988   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 33/60
	I0914 22:42:20.325614   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 34/60
	I0914 22:42:21.327567   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 35/60
	I0914 22:42:22.328794   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 36/60
	I0914 22:42:23.330134   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 37/60
	I0914 22:42:24.331555   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 38/60
	I0914 22:42:25.332869   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 39/60
	I0914 22:42:26.334806   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 40/60
	I0914 22:42:27.336070   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 41/60
	I0914 22:42:28.337580   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 42/60
	I0914 22:42:29.338975   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 43/60
	I0914 22:42:30.340276   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 44/60
	I0914 22:42:31.342268   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 45/60
	I0914 22:42:32.343515   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 46/60
	I0914 22:42:33.344932   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 47/60
	I0914 22:42:34.346331   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 48/60
	I0914 22:42:35.347659   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 49/60
	I0914 22:42:36.349986   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 50/60
	I0914 22:42:37.351563   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 51/60
	I0914 22:42:38.353092   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 52/60
	I0914 22:42:39.354516   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 53/60
	I0914 22:42:40.356276   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 54/60
	I0914 22:42:41.358521   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 55/60
	I0914 22:42:42.360146   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 56/60
	I0914 22:42:43.361608   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 57/60
	I0914 22:42:44.362925   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 58/60
	I0914 22:42:45.364418   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 59/60
	I0914 22:42:46.365724   45605 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0914 22:42:46.365791   45605 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:42:46.365818   45605 retry.go:31] will retry after 1.406600195s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:42:47.773347   45605 stop.go:39] StopHost: embed-certs-588699
	I0914 22:42:47.773846   45605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:42:47.773904   45605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:42:47.788468   45605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I0914 22:42:47.788914   45605 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:42:47.789480   45605 main.go:141] libmachine: Using API Version  1
	I0914 22:42:47.789512   45605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:42:47.789822   45605 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:42:47.792859   45605 out.go:177] * Stopping node "embed-certs-588699"  ...
	I0914 22:42:47.794072   45605 main.go:141] libmachine: Stopping "embed-certs-588699"...
	I0914 22:42:47.794087   45605 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:42:47.795659   45605 main.go:141] libmachine: (embed-certs-588699) Calling .Stop
	I0914 22:42:47.798904   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 0/60
	I0914 22:42:48.800492   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 1/60
	I0914 22:42:49.801934   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 2/60
	I0914 22:42:50.803404   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 3/60
	I0914 22:42:51.804780   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 4/60
	I0914 22:42:52.806434   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 5/60
	I0914 22:42:53.807921   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 6/60
	I0914 22:42:54.809275   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 7/60
	I0914 22:42:55.810651   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 8/60
	I0914 22:42:56.812069   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 9/60
	I0914 22:42:57.814043   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 10/60
	I0914 22:42:58.815231   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 11/60
	I0914 22:42:59.816712   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 12/60
	I0914 22:43:00.818066   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 13/60
	I0914 22:43:01.819330   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 14/60
	I0914 22:43:02.821112   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 15/60
	I0914 22:43:03.822556   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 16/60
	I0914 22:43:04.823821   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 17/60
	I0914 22:43:05.825162   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 18/60
	I0914 22:43:06.826393   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 19/60
	I0914 22:43:07.828094   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 20/60
	I0914 22:43:08.829553   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 21/60
	I0914 22:43:09.830856   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 22/60
	I0914 22:43:10.832226   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 23/60
	I0914 22:43:11.833645   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 24/60
	I0914 22:43:12.835396   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 25/60
	I0914 22:43:13.836681   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 26/60
	I0914 22:43:14.837995   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 27/60
	I0914 22:43:15.839432   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 28/60
	I0914 22:43:16.841231   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 29/60
	I0914 22:43:17.843041   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 30/60
	I0914 22:43:18.844763   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 31/60
	I0914 22:43:19.846085   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 32/60
	I0914 22:43:20.847507   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 33/60
	I0914 22:43:21.849485   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 34/60
	I0914 22:43:22.851547   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 35/60
	I0914 22:43:23.852903   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 36/60
	I0914 22:43:24.854548   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 37/60
	I0914 22:43:25.856018   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 38/60
	I0914 22:43:26.857399   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 39/60
	I0914 22:43:27.859404   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 40/60
	I0914 22:43:28.860985   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 41/60
	I0914 22:43:29.862359   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 42/60
	I0914 22:43:30.864121   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 43/60
	I0914 22:43:31.865399   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 44/60
	I0914 22:43:32.867141   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 45/60
	I0914 22:43:33.868674   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 46/60
	I0914 22:43:34.870214   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 47/60
	I0914 22:43:35.871348   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 48/60
	I0914 22:43:36.872727   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 49/60
	I0914 22:43:37.874370   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 50/60
	I0914 22:43:38.875907   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 51/60
	I0914 22:43:39.877208   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 52/60
	I0914 22:43:40.878593   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 53/60
	I0914 22:43:41.879914   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 54/60
	I0914 22:43:42.881761   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 55/60
	I0914 22:43:43.883426   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 56/60
	I0914 22:43:44.884839   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 57/60
	I0914 22:43:45.886163   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 58/60
	I0914 22:43:46.887828   45605 main.go:141] libmachine: (embed-certs-588699) Waiting for machine to stop 59/60
	I0914 22:43:47.888716   45605 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0914 22:43:47.888756   45605 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:43:47.890757   45605 out.go:177] 
	W0914 22:43:47.892141   45605 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 22:43:47.892156   45605 out.go:239] * 
	* 
	W0914 22:43:47.894468   45605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 22:43:47.895757   45605 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-588699 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699: exit status 3 (18.498324109s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:44:06.395773   46230 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host
	E0914 22:44:06.395791   46230 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-588699" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.19s)
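
In the log above, the stop path asks the kvm2 driver to stop the domain, polls "Waiting for machine to stop N/60" once per second, retries the whole StopHost once, and then exits 82 with GUEST_STOP_TIMEOUT because the VM still reports "Running". One hedged way to check from the host whether libvirt ever acted on the shutdown, assuming the kvm2 driver names the libvirt domain after the profile (an assumption; the domain name is not shown in this log):

	# sketch only: inspect the libvirt domain directly (domain name assumed to match the profile)
	virsh list --all
	virsh domstate embed-certs-588699
	# if it still reports "running", force it off before retrying the minikube stop
	virsh destroy embed-certs-588699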

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144: exit status 3 (3.168126891s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:42:38.171751   45793 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.175:22: connect: no route to host
	E0914 22:42:38.171772   45793 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.175:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-799144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-799144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152613043s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.175:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-799144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144: exit status 3 (3.062598778s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:42:47.387780   45885 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.175:22: connect: no route to host
	E0914 22:42:47.387799   45885 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.175:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-799144" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
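
The addons-enable failure here is not about the dashboard addon itself; it fails earlier, when minikube tries to open an SSH session to 192.168.50.175:22 and gets "no route to host" because the guest is stuck in the half-stopped "Error" state. A quick connectivity check before re-running the addon command, using the ssh subcommand form already used elsewhere in this report:

	# confirm whether the guest is reachable over SSH at all
	out/minikube-linux-amd64 -p default-k8s-diff-port-799144 ssh "true"
	# a "no route to host" here reproduces the same underlying failure as the addons enable call above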

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (139.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-930717 --alsologtostderr -v=3
E0914 22:43:15.238752   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 22:43:32.189146   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-930717 --alsologtostderr -v=3: exit status 82 (2m1.320252749s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-930717"  ...
	* Stopping node "old-k8s-version-930717"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:42:48.163073   46017 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:42:48.163194   46017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:42:48.163204   46017 out.go:309] Setting ErrFile to fd 2...
	I0914 22:42:48.163208   46017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:42:48.163394   46017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:42:48.163678   46017 out.go:303] Setting JSON to false
	I0914 22:42:48.163781   46017 mustload.go:65] Loading cluster: old-k8s-version-930717
	I0914 22:42:48.164083   46017 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:42:48.164155   46017 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:42:48.164332   46017 mustload.go:65] Loading cluster: old-k8s-version-930717
	I0914 22:42:48.164453   46017 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:42:48.164495   46017 stop.go:39] StopHost: old-k8s-version-930717
	I0914 22:42:48.164857   46017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:42:48.164903   46017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:42:48.178860   46017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0914 22:42:48.179320   46017 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:42:48.180003   46017 main.go:141] libmachine: Using API Version  1
	I0914 22:42:48.180036   46017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:42:48.180376   46017 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:42:48.182833   46017 out.go:177] * Stopping node "old-k8s-version-930717"  ...
	I0914 22:42:48.184445   46017 main.go:141] libmachine: Stopping "old-k8s-version-930717"...
	I0914 22:42:48.184459   46017 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:42:48.186110   46017 main.go:141] libmachine: (old-k8s-version-930717) Calling .Stop
	I0914 22:42:48.189307   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 0/60
	I0914 22:42:49.190721   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 1/60
	I0914 22:42:50.192100   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 2/60
	I0914 22:42:51.193489   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 3/60
	I0914 22:42:52.194890   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 4/60
	I0914 22:42:53.196803   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 5/60
	I0914 22:42:54.198284   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 6/60
	I0914 22:42:55.199561   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 7/60
	I0914 22:42:56.200780   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 8/60
	I0914 22:42:57.202212   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 9/60
	I0914 22:42:58.203735   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 10/60
	I0914 22:42:59.205143   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 11/60
	I0914 22:43:00.206508   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 12/60
	I0914 22:43:01.207956   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 13/60
	I0914 22:43:02.209162   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 14/60
	I0914 22:43:03.211084   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 15/60
	I0914 22:43:04.212691   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 16/60
	I0914 22:43:05.214112   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 17/60
	I0914 22:43:06.215765   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 18/60
	I0914 22:43:07.217099   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 19/60
	I0914 22:43:08.219919   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 20/60
	I0914 22:43:09.221178   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 21/60
	I0914 22:43:10.222564   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 22/60
	I0914 22:43:11.223792   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 23/60
	I0914 22:43:12.225435   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 24/60
	I0914 22:43:13.227710   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 25/60
	I0914 22:43:14.229178   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 26/60
	I0914 22:43:15.230473   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 27/60
	I0914 22:43:16.231911   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 28/60
	I0914 22:43:17.233509   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 29/60
	I0914 22:43:18.235898   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 30/60
	I0914 22:43:19.237205   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 31/60
	I0914 22:43:20.238893   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 32/60
	I0914 22:43:21.240140   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 33/60
	I0914 22:43:22.241544   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 34/60
	I0914 22:43:23.243888   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 35/60
	I0914 22:43:24.245459   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 36/60
	I0914 22:43:25.246932   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 37/60
	I0914 22:43:26.248434   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 38/60
	I0914 22:43:27.250133   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 39/60
	I0914 22:43:28.252324   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 40/60
	I0914 22:43:29.253918   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 41/60
	I0914 22:43:30.255179   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 42/60
	I0914 22:43:31.256567   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 43/60
	I0914 22:43:32.257915   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 44/60
	I0914 22:43:33.260037   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 45/60
	I0914 22:43:34.262142   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 46/60
	I0914 22:43:35.263459   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 47/60
	I0914 22:43:36.264855   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 48/60
	I0914 22:43:37.266166   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 49/60
	I0914 22:43:38.268415   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 50/60
	I0914 22:43:39.269710   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 51/60
	I0914 22:43:40.271265   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 52/60
	I0914 22:43:41.272491   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 53/60
	I0914 22:43:42.273962   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 54/60
	I0914 22:43:43.276016   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 55/60
	I0914 22:43:44.277464   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 56/60
	I0914 22:43:45.278933   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 57/60
	I0914 22:43:46.280303   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 58/60
	I0914 22:43:47.281699   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 59/60
	I0914 22:43:48.283020   46017 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0914 22:43:48.283076   46017 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:43:48.283093   46017 retry.go:31] will retry after 1.039662147s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:43:49.323295   46017 stop.go:39] StopHost: old-k8s-version-930717
	I0914 22:43:49.323825   46017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:43:49.323879   46017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:43:49.337865   46017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0914 22:43:49.338272   46017 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:43:49.338707   46017 main.go:141] libmachine: Using API Version  1
	I0914 22:43:49.338736   46017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:43:49.339069   46017 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:43:49.341081   46017 out.go:177] * Stopping node "old-k8s-version-930717"  ...
	I0914 22:43:49.342727   46017 main.go:141] libmachine: Stopping "old-k8s-version-930717"...
	I0914 22:43:49.342745   46017 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:43:49.344192   46017 main.go:141] libmachine: (old-k8s-version-930717) Calling .Stop
	I0914 22:43:49.347611   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 0/60
	I0914 22:43:50.349013   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 1/60
	I0914 22:43:51.350573   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 2/60
	I0914 22:43:52.351798   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 3/60
	I0914 22:43:53.353057   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 4/60
	I0914 22:43:54.354645   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 5/60
	I0914 22:43:55.356047   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 6/60
	I0914 22:43:56.357231   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 7/60
	I0914 22:43:57.358505   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 8/60
	I0914 22:43:58.359921   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 9/60
	I0914 22:43:59.361980   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 10/60
	I0914 22:44:00.363267   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 11/60
	I0914 22:44:01.364848   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 12/60
	I0914 22:44:02.366042   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 13/60
	I0914 22:44:03.367524   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 14/60
	I0914 22:44:04.369314   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 15/60
	I0914 22:44:05.370647   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 16/60
	I0914 22:44:06.372175   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 17/60
	I0914 22:44:07.373421   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 18/60
	I0914 22:44:08.374846   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 19/60
	I0914 22:44:09.376603   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 20/60
	I0914 22:44:10.378059   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 21/60
	I0914 22:44:11.379488   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 22/60
	I0914 22:44:12.380835   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 23/60
	I0914 22:44:13.382385   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 24/60
	I0914 22:44:14.384247   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 25/60
	I0914 22:44:15.385770   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 26/60
	I0914 22:44:16.387220   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 27/60
	I0914 22:44:17.388699   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 28/60
	I0914 22:44:18.390187   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 29/60
	I0914 22:44:19.392519   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 30/60
	I0914 22:44:20.394293   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 31/60
	I0914 22:44:21.395907   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 32/60
	I0914 22:44:22.397207   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 33/60
	I0914 22:44:23.398649   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 34/60
	I0914 22:44:24.400402   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 35/60
	I0914 22:44:25.402002   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 36/60
	I0914 22:44:26.403253   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 37/60
	I0914 22:44:27.404613   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 38/60
	I0914 22:44:28.405970   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 39/60
	I0914 22:44:29.407789   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 40/60
	I0914 22:44:30.409118   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 41/60
	I0914 22:44:31.410427   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 42/60
	I0914 22:44:32.411827   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 43/60
	I0914 22:44:33.413167   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 44/60
	I0914 22:44:34.415187   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 45/60
	I0914 22:44:35.416515   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 46/60
	I0914 22:44:36.417955   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 47/60
	I0914 22:44:37.419337   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 48/60
	I0914 22:44:38.420756   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 49/60
	I0914 22:44:39.422662   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 50/60
	I0914 22:44:40.424060   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 51/60
	I0914 22:44:41.425587   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 52/60
	I0914 22:44:42.427109   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 53/60
	I0914 22:44:43.428552   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 54/60
	I0914 22:44:44.430241   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 55/60
	I0914 22:44:45.431739   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 56/60
	I0914 22:44:46.433107   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 57/60
	I0914 22:44:47.434598   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 58/60
	I0914 22:44:48.436500   46017 main.go:141] libmachine: (old-k8s-version-930717) Waiting for machine to stop 59/60
	I0914 22:44:49.437752   46017 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0914 22:44:49.437798   46017 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 22:44:49.439884   46017 out.go:177] 
	W0914 22:44:49.441268   46017 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 22:44:49.441288   46017 out.go:239] * 
	* 
	W0914 22:44:49.443424   46017 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 22:44:49.444928   46017 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-930717 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717: exit status 3 (18.645077632s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:45:08.091836   46539 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.70:22: connect: no route to host
	E0914 22:45:08.091861   46539 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.70:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-930717" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699: exit status 3 (3.16809029s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:44:09.563782   46312 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host
	E0914 22:44:09.563812   46312 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-588699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-588699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152914084s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-588699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699: exit status 3 (3.063160334s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 22:44:18.779891   46371 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host
	E0914 22:44:18.779916   46371 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-588699" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717: exit status 3 (3.167641217s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0914 22:45:11.259740   46613 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.70:22: connect: no route to host
	E0914 22:45:11.259762   46613 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.70:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-930717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-930717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153281069s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.70:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-930717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717: exit status 3 (3.062705995s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0914 22:45:20.475942   46683 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.70:22: connect: no route to host
	E0914 22:45:20.475963   46683 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.70:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-930717" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)
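Note: both EnableAddonAfterStop failures above (embed-certs-588699 and old-k8s-version-930717) show the same symptom: after the preceding Stop step the VM never becomes reachable over SSH ("dial tcp <ip>:22: connect: no route to host"), so the post-stop status check reports "Error" instead of the expected "Stopped" and the follow-up addon enable exits with MK_ADDON_ENABLE_PAUSED. A minimal sketch of the two commands the test replays, taken verbatim from the log above (substitute your own profile name when reproducing):

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-930717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4

In this run the first command exited 3 ("Error") and the second exited 11.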

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 22:51:36.475394   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-14 23:00:16.008287893 +0000 UTC m=+5038.202629519
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-799144 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-799144 logs -n 25: (1.584962967s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-711912                           | kubernetes-upgrade-711912    | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:36 UTC |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-344363             | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:40 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799144  | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC |                     |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-344363                  | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-588699            | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799144       | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-930717        | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:51 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-588699                 | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-930717             | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC | 14 Sep 23 22:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:45:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:45:20.513575   46713 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:45:20.513835   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.513847   46713 out.go:309] Setting ErrFile to fd 2...
	I0914 22:45:20.513852   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.514030   46713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:45:20.514571   46713 out.go:303] Setting JSON to false
	I0914 22:45:20.515550   46713 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5263,"bootTime":1694726258,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:45:20.515607   46713 start.go:138] virtualization: kvm guest
	I0914 22:45:20.517738   46713 out.go:177] * [old-k8s-version-930717] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:45:20.519301   46713 notify.go:220] Checking for updates...
	I0914 22:45:20.519309   46713 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:45:20.520886   46713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:45:20.522525   46713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:45:20.524172   46713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:45:20.525826   46713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:45:20.527204   46713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:45:20.529068   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:45:20.529489   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.529542   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.548088   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0914 22:45:20.548488   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.548969   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.548985   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.549404   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.549555   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.551507   46713 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 22:45:20.552878   46713 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:45:20.553145   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.553176   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.566825   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0914 22:45:20.567181   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.567617   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.567646   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.568018   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.568195   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.601886   46713 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:45:20.603176   46713 start.go:298] selected driver: kvm2
	I0914 22:45:20.603188   46713 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.603284   46713 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:45:20.603926   46713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.603997   46713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:45:20.617678   46713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:45:20.618009   46713 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:45:20.618045   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:45:20.618062   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:45:20.618075   46713 start_flags.go:321] config:
	{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.618204   46713 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.619892   46713 out.go:177] * Starting control plane node old-k8s-version-930717 in cluster old-k8s-version-930717
	I0914 22:45:22.939748   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:20.621146   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:45:20.621171   46713 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0914 22:45:20.621184   46713 cache.go:57] Caching tarball of preloaded images
	I0914 22:45:20.621265   46713 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:45:20.621286   46713 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0914 22:45:20.621381   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:45:20.621551   46713 start.go:365] acquiring machines lock for old-k8s-version-930717: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:45:29.019730   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:32.091705   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:38.171724   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:41.243661   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:47.323733   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:50.395751   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:56.475703   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:59.547782   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:46:02.551591   45954 start.go:369] acquired machines lock for "default-k8s-diff-port-799144" in 3m15.018428257s
	I0914 22:46:02.551631   45954 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:02.551642   45954 fix.go:54] fixHost starting: 
	I0914 22:46:02.551944   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:02.551972   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:02.566520   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0914 22:46:02.566922   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:02.567373   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:02.567392   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:02.567734   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:02.567961   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:02.568128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:02.569692   45954 fix.go:102] recreateIfNeeded on default-k8s-diff-port-799144: state=Stopped err=<nil>
	I0914 22:46:02.569714   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	W0914 22:46:02.569887   45954 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:02.571684   45954 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799144" ...
	I0914 22:46:02.549458   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:02.549490   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:46:02.551419   45407 machine.go:91] provisioned docker machine in 4m37.435317847s
	I0914 22:46:02.551457   45407 fix.go:56] fixHost completed within 4m37.455553972s
	I0914 22:46:02.551462   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 4m37.455581515s
	W0914 22:46:02.551502   45407 start.go:688] error starting host: provision: host is not running
	W0914 22:46:02.551586   45407 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0914 22:46:02.551600   45407 start.go:703] Will try again in 5 seconds ...
	I0914 22:46:02.573354   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Start
	I0914 22:46:02.573535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring networks are active...
	I0914 22:46:02.574326   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network default is active
	I0914 22:46:02.574644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network mk-default-k8s-diff-port-799144 is active
	I0914 22:46:02.575046   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Getting domain xml...
	I0914 22:46:02.575767   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Creating domain...
	I0914 22:46:03.792613   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting to get IP...
	I0914 22:46:03.793573   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.793932   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.794029   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:03.793928   46868 retry.go:31] will retry after 250.767464ms: waiting for machine to come up
	I0914 22:46:04.046447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046928   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.046853   46868 retry.go:31] will retry after 320.29371ms: waiting for machine to come up
	I0914 22:46:04.368383   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368782   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368814   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.368726   46868 retry.go:31] will retry after 295.479496ms: waiting for machine to come up
	I0914 22:46:04.666192   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666655   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666680   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.666595   46868 retry.go:31] will retry after 572.033699ms: waiting for machine to come up
	I0914 22:46:05.240496   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240920   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240953   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.240872   46868 retry.go:31] will retry after 493.557238ms: waiting for machine to come up
	I0914 22:46:05.735682   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736201   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.736150   46868 retry.go:31] will retry after 848.645524ms: waiting for machine to come up
	I0914 22:46:06.586116   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586568   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:06.586473   46868 retry.go:31] will retry after 866.110647ms: waiting for machine to come up
	I0914 22:46:07.553803   45407 start.go:365] acquiring machines lock for no-preload-344363: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:46:07.454431   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454798   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454827   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:07.454743   46868 retry.go:31] will retry after 1.485337575s: waiting for machine to come up
	I0914 22:46:08.941761   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942136   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942177   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:08.942104   46868 retry.go:31] will retry after 1.640651684s: waiting for machine to come up
	I0914 22:46:10.584576   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584939   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:10.584838   46868 retry.go:31] will retry after 1.656716681s: waiting for machine to come up
	I0914 22:46:12.243599   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244096   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:12.244037   46868 retry.go:31] will retry after 2.692733224s: waiting for machine to come up
	I0914 22:46:14.939726   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940035   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940064   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:14.939986   46868 retry.go:31] will retry after 2.745837942s: waiting for machine to come up
	I0914 22:46:22.180177   46412 start.go:369] acquired machines lock for "embed-certs-588699" in 2m3.238409394s
	I0914 22:46:22.180244   46412 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:22.180256   46412 fix.go:54] fixHost starting: 
	I0914 22:46:22.180661   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:22.180706   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:22.196558   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0914 22:46:22.196900   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:22.197304   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:46:22.197326   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:22.197618   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:22.197808   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:22.197986   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:46:22.199388   46412 fix.go:102] recreateIfNeeded on embed-certs-588699: state=Stopped err=<nil>
	I0914 22:46:22.199423   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	W0914 22:46:22.199595   46412 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:22.202757   46412 out.go:177] * Restarting existing kvm2 VM for "embed-certs-588699" ...
	I0914 22:46:17.687397   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687911   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687937   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:17.687878   46868 retry.go:31] will retry after 3.174192278s: waiting for machine to come up
	I0914 22:46:20.866173   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866687   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Found IP for machine: 192.168.50.175
	I0914 22:46:20.866722   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has current primary IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866737   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserving static IP address...
	I0914 22:46:20.867209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.867245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | skip adding static IP to network mk-default-k8s-diff-port-799144 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"}
	I0914 22:46:20.867263   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserved static IP address: 192.168.50.175
	I0914 22:46:20.867290   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for SSH to be available...
	I0914 22:46:20.867303   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Getting to WaitForSSH function...
	I0914 22:46:20.869597   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.869960   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.869993   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.870103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH client type: external
	I0914 22:46:20.870137   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa (-rw-------)
	I0914 22:46:20.870193   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:20.870218   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | About to run SSH command:
	I0914 22:46:20.870237   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | exit 0
	I0914 22:46:20.959125   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:20.959456   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetConfigRaw
	I0914 22:46:20.960082   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:20.962512   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.962889   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.962915   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.963114   45954 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/config.json ...
	I0914 22:46:20.963282   45954 machine.go:88] provisioning docker machine ...
	I0914 22:46:20.963300   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:20.963509   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963682   45954 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799144"
	I0914 22:46:20.963709   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963899   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:20.966359   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966728   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.966757   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966956   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:20.967146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967287   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967420   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:20.967584   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:20.967963   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:20.967983   45954 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799144 && echo "default-k8s-diff-port-799144" | sudo tee /etc/hostname
	I0914 22:46:21.098114   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799144
	
	I0914 22:46:21.098158   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.100804   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101167   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.101208   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.101532   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101855   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.102028   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.102386   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.102406   45954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799144/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:21.225929   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:21.225964   45954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:21.225992   45954 buildroot.go:174] setting up certificates
	I0914 22:46:21.226007   45954 provision.go:83] configureAuth start
	I0914 22:46:21.226023   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:21.226299   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:21.229126   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229514   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.229555   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.231683   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.231992   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.232027   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.232179   45954 provision.go:138] copyHostCerts
	I0914 22:46:21.232233   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:21.232247   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:21.232321   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:21.232412   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:21.232421   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:21.232446   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:21.232542   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:21.232551   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:21.232572   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:21.232617   45954 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799144 san=[192.168.50.175 192.168.50.175 localhost 127.0.0.1 minikube default-k8s-diff-port-799144]
	I0914 22:46:21.489180   45954 provision.go:172] copyRemoteCerts
	I0914 22:46:21.489234   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:21.489257   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.491989   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492308   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.492334   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.492734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.492869   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.493038   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:21.579991   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0914 22:46:21.599819   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:21.619391   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:21.638607   45954 provision.go:86] duration metric: configureAuth took 412.585328ms
	I0914 22:46:21.638629   45954 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:21.638797   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:21.638867   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.641693   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642033   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.642067   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.642399   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642562   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.642900   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.643239   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.643257   45954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:21.928913   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:21.928940   45954 machine.go:91] provisioned docker machine in 965.645328ms
	I0914 22:46:21.928952   45954 start.go:300] post-start starting for "default-k8s-diff-port-799144" (driver="kvm2")
	I0914 22:46:21.928964   45954 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:21.928987   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:21.929377   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:21.929425   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.931979   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932350   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.932388   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932475   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.932704   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.932923   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.933059   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.020329   45954 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:22.024444   45954 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:22.024458   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:22.024513   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:22.024589   45954 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:22.024672   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:22.033456   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:22.054409   45954 start.go:303] post-start completed in 125.445528ms
	I0914 22:46:22.054427   45954 fix.go:56] fixHost completed within 19.502785226s
	I0914 22:46:22.054444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.057353   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057690   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.057721   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057925   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.058139   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058304   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058483   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.058657   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:22.059051   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:22.059065   45954 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:22.180023   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731582.133636857
	
	I0914 22:46:22.180044   45954 fix.go:206] guest clock: 1694731582.133636857
	I0914 22:46:22.180054   45954 fix.go:219] Guest: 2023-09-14 22:46:22.133636857 +0000 UTC Remote: 2023-09-14 22:46:22.054430307 +0000 UTC m=+214.661061156 (delta=79.20655ms)
	I0914 22:46:22.180078   45954 fix.go:190] guest clock delta is within tolerance: 79.20655ms
	I0914 22:46:22.180084   45954 start.go:83] releasing machines lock for "default-k8s-diff-port-799144", held for 19.628473828s
	I0914 22:46:22.180114   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.180408   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:22.183182   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183507   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.183543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183675   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184175   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184384   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184494   45954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:22.184535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.184627   45954 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:22.184662   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.187447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187604   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187813   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.187839   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187971   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.187986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.188024   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.188151   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.188153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188344   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188391   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188500   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.188519   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188618   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.303009   45954 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:22.308185   45954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:22.450504   45954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:22.455642   45954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:22.455700   45954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:22.468430   45954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:22.468453   45954 start.go:469] detecting cgroup driver to use...
	I0914 22:46:22.468509   45954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:22.483524   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:22.494650   45954 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:22.494706   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:22.506589   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:22.518370   45954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:22.619545   45954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:22.737486   45954 docker.go:212] disabling docker service ...
	I0914 22:46:22.737551   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:22.749267   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:22.759866   45954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:22.868561   45954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:22.973780   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:22.986336   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:23.004987   45954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:23.005042   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.013821   45954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:23.013889   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.022487   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.030875   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.038964   45954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:23.047246   45954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:23.054339   45954 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:23.054379   45954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:23.066649   45954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:23.077024   45954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:23.174635   45954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:23.337031   45954 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:23.337113   45954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:23.342241   45954 start.go:537] Will wait 60s for crictl version
	I0914 22:46:23.342308   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:46:23.345832   45954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:23.377347   45954 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:23.377433   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.425559   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.492770   45954 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:22.203936   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Start
	I0914 22:46:22.204098   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring networks are active...
	I0914 22:46:22.204740   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network default is active
	I0914 22:46:22.205158   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network mk-embed-certs-588699 is active
	I0914 22:46:22.205524   46412 main.go:141] libmachine: (embed-certs-588699) Getting domain xml...
	I0914 22:46:22.206216   46412 main.go:141] libmachine: (embed-certs-588699) Creating domain...
	I0914 22:46:23.529479   46412 main.go:141] libmachine: (embed-certs-588699) Waiting to get IP...
	I0914 22:46:23.530274   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.530639   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.530694   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.530608   46986 retry.go:31] will retry after 299.617651ms: waiting for machine to come up
	I0914 22:46:23.494065   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:23.496974   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497458   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:23.497490   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497694   45954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:23.501920   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:23.517500   45954 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:23.517542   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:23.554344   45954 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:23.554403   45954 ssh_runner.go:195] Run: which lz4
	I0914 22:46:23.558745   45954 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:23.563443   45954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:23.563488   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:25.365372   45954 crio.go:444] Took 1.806660 seconds to copy over tarball
	I0914 22:46:25.365442   45954 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:23.832332   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.833457   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.833488   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.832911   46986 retry.go:31] will retry after 315.838121ms: waiting for machine to come up
	I0914 22:46:24.150532   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.150980   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.151009   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.150942   46986 retry.go:31] will retry after 369.928332ms: waiting for machine to come up
	I0914 22:46:24.522720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.523232   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.523257   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.523145   46986 retry.go:31] will retry after 533.396933ms: waiting for machine to come up
	I0914 22:46:25.057818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.058371   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.058405   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.058318   46986 retry.go:31] will retry after 747.798377ms: waiting for machine to come up
	I0914 22:46:25.807422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.807912   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.807956   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.807874   46986 retry.go:31] will retry after 947.037376ms: waiting for machine to come up
	I0914 22:46:26.756214   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:26.756720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:26.756757   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:26.756689   46986 retry.go:31] will retry after 1.117164865s: waiting for machine to come up
	I0914 22:46:27.875432   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:27.875931   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:27.875953   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:27.875886   46986 retry.go:31] will retry after 1.117181084s: waiting for machine to come up
	I0914 22:46:28.197684   45954 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.832216899s)
	I0914 22:46:28.197710   45954 crio.go:451] Took 2.832313 seconds to extract the tarball
	I0914 22:46:28.197718   45954 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:28.236545   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:28.286349   45954 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:28.286374   45954 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:28.286449   45954 ssh_runner.go:195] Run: crio config
	I0914 22:46:28.344205   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:28.344231   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:28.344253   45954 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:28.344289   45954 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.175 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799144 NodeName:default-k8s-diff-port-799144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:28.344454   45954 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.175
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799144"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:28.344536   45954 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-799144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0914 22:46:28.344591   45954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:28.354383   45954 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:28.354459   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:28.363277   45954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0914 22:46:28.378875   45954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:28.393535   45954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0914 22:46:28.408319   45954 ssh_runner.go:195] Run: grep 192.168.50.175	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:28.411497   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:28.421507   45954 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144 for IP: 192.168.50.175
	I0914 22:46:28.421536   45954 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:28.421702   45954 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:28.421742   45954 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:28.421805   45954 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.key
	I0914 22:46:28.421858   45954 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key.0216c1e7
	I0914 22:46:28.421894   45954 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key
	I0914 22:46:28.421994   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:28.422020   45954 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:28.422027   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:28.422048   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:28.422074   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:28.422095   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:28.422139   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:28.422695   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:28.443528   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:46:28.463679   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:28.483317   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:28.503486   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:28.523709   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:28.544539   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:28.565904   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:28.587316   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:28.611719   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:28.632158   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:28.652227   45954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:28.667709   45954 ssh_runner.go:195] Run: openssl version
	I0914 22:46:28.673084   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:28.682478   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686693   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686747   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.691836   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:28.701203   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:28.710996   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715353   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715408   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.720765   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:28.730750   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:28.740782   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745186   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745250   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.750589   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:28.760675   45954 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:28.764920   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:28.770573   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:28.776098   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:28.783455   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:28.790699   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:28.797514   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:28.804265   45954 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-799144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:28.804376   45954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:28.804427   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:28.833994   45954 cri.go:89] found id: ""
	I0914 22:46:28.834051   45954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:28.843702   45954 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:28.843724   45954 kubeadm.go:636] restartCluster start
	I0914 22:46:28.843769   45954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:28.852802   45954 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.854420   45954 kubeconfig.go:92] found "default-k8s-diff-port-799144" server: "https://192.168.50.175:8444"
	I0914 22:46:28.858058   45954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:28.866914   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.866968   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.877946   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.877969   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.878014   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.888579   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.389311   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.389420   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.401725   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.889346   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.889451   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.902432   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.388985   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.389062   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.401302   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.888853   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.888949   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.901032   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.389622   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.389733   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.405102   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.888685   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.888803   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.904300   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:32.388876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.388944   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.402419   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.995080   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:28.999205   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:28.999224   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:28.995414   46986 retry.go:31] will retry after 1.657878081s: waiting for machine to come up
	I0914 22:46:30.655422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:30.656029   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:30.656059   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:30.655960   46986 retry.go:31] will retry after 2.320968598s: waiting for machine to come up
	I0914 22:46:32.978950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:32.979423   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:32.979452   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:32.979369   46986 retry.go:31] will retry after 2.704173643s: waiting for machine to come up
	I0914 22:46:32.889585   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.889658   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.902514   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.388806   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.388906   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.405028   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.889633   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.889728   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.906250   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.388736   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.388810   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.403376   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.888851   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.888934   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.905873   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.389446   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.389516   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.404872   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.889475   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.889569   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.902431   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.388954   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.389054   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.401778   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.889442   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.889529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.902367   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:37.388925   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.389009   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.401860   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.685608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:35.686027   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:35.686064   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:35.685964   46986 retry.go:31] will retry after 2.240780497s: waiting for machine to come up
	I0914 22:46:37.928020   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:37.928402   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:37.928442   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:37.928354   46986 retry.go:31] will retry after 2.734049647s: waiting for machine to come up
	I0914 22:46:41.860186   46713 start.go:369] acquired machines lock for "old-k8s-version-930717" in 1m21.238611742s
	I0914 22:46:41.860234   46713 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:41.860251   46713 fix.go:54] fixHost starting: 
	I0914 22:46:41.860683   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:41.860738   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:41.877474   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0914 22:46:41.877964   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:41.878542   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:46:41.878568   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:41.878874   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:41.879057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:46:41.879276   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:46:41.880990   46713 fix.go:102] recreateIfNeeded on old-k8s-version-930717: state=Stopped err=<nil>
	I0914 22:46:41.881019   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	W0914 22:46:41.881175   46713 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:41.883128   46713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-930717" ...
	I0914 22:46:37.888876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.888950   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.901522   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.389056   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:38.389140   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:38.400632   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.867426   45954 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:38.867461   45954 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:38.867487   45954 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:38.867557   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:38.898268   45954 cri.go:89] found id: ""
	I0914 22:46:38.898328   45954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:38.914871   45954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:38.924737   45954 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:38.924785   45954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934436   45954 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934455   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.042672   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.982954   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.158791   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.235541   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.312855   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:46:40.312926   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.328687   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.842859   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.343019   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.842336   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.342351   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
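
For readers following the 45954 (default-k8s-diff-port-799144) stream above: once the stale-config check fails, the recovery it logs reduces to roughly the shell sequence below. The individual commands are lifted straight from the log; the loop over phases and the polling interval are editorial shorthand, not minikube's exact retry logic.

    # overwrite the stale kubeadm config with the freshly rendered one
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml

    # re-run each kubeadm init phase against it (binaries live under /var/lib/minikube)
    # $phase is deliberately unquoted so "certs all" splits into two arguments
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done

    # then wait for the apiserver process, as the repeated pgrep lines show
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
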
	I0914 22:46:40.665315   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.665775   46412 main.go:141] libmachine: (embed-certs-588699) Found IP for machine: 192.168.61.205
	I0914 22:46:40.665795   46412 main.go:141] libmachine: (embed-certs-588699) Reserving static IP address...
	I0914 22:46:40.665807   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has current primary IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.666273   46412 main.go:141] libmachine: (embed-certs-588699) Reserved static IP address: 192.168.61.205
	I0914 22:46:40.666316   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.666334   46412 main.go:141] libmachine: (embed-certs-588699) Waiting for SSH to be available...
	I0914 22:46:40.666375   46412 main.go:141] libmachine: (embed-certs-588699) DBG | skip adding static IP to network mk-embed-certs-588699 - found existing host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"}
	I0914 22:46:40.666401   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Getting to WaitForSSH function...
	I0914 22:46:40.668206   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668515   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.668542   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668654   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH client type: external
	I0914 22:46:40.668689   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa (-rw-------)
	I0914 22:46:40.668716   46412 main.go:141] libmachine: (embed-certs-588699) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:40.668728   46412 main.go:141] libmachine: (embed-certs-588699) DBG | About to run SSH command:
	I0914 22:46:40.668736   46412 main.go:141] libmachine: (embed-certs-588699) DBG | exit 0
	I0914 22:46:40.751202   46412 main.go:141] libmachine: (embed-certs-588699) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:40.751584   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetConfigRaw
	I0914 22:46:40.752291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:40.754685   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755054   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.755087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755318   46412 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/config.json ...
	I0914 22:46:40.755578   46412 machine.go:88] provisioning docker machine ...
	I0914 22:46:40.755603   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:40.755799   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.755940   46412 buildroot.go:166] provisioning hostname "embed-certs-588699"
	I0914 22:46:40.755959   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.756109   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.758111   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758435   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.758481   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758547   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.758686   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758798   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758983   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.759108   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.759567   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.759586   46412 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-588699 && echo "embed-certs-588699" | sudo tee /etc/hostname
	I0914 22:46:40.882559   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-588699
	
	I0914 22:46:40.882615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.885741   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.886137   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886403   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.886635   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886810   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886964   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.887176   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.887633   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.887662   46412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-588699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-588699/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-588699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:41.007991   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:41.008024   46412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:41.008075   46412 buildroot.go:174] setting up certificates
	I0914 22:46:41.008103   46412 provision.go:83] configureAuth start
	I0914 22:46:41.008118   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:41.008615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.011893   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012262   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.012295   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012467   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.014904   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015343   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.015378   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015551   46412 provision.go:138] copyHostCerts
	I0914 22:46:41.015605   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:41.015618   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:41.015691   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:41.015847   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:41.015864   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:41.015897   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:41.015979   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:41.015989   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:41.016019   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:41.016080   46412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.embed-certs-588699 san=[192.168.61.205 192.168.61.205 localhost 127.0.0.1 minikube embed-certs-588699]
	I0914 22:46:41.134486   46412 provision.go:172] copyRemoteCerts
	I0914 22:46:41.134537   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:41.134559   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.137472   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137789   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.137818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137995   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.138216   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.138365   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.138536   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.224196   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:41.244551   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:46:41.267745   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:41.292472   46412 provision.go:86] duration metric: configureAuth took 284.355734ms
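
The server certificate generated a few lines up carries the SAN list the log reports (192.168.61.205, localhost, 127.0.0.1, minikube, embed-certs-588699). If a TLS/SAN mismatch were suspected in a failure like this, the cert can be inspected on the Jenkins host with a standard openssl call; this is a debugging aid, not something the test itself runs:

    # show the SAN list baked into the freshly generated server cert
    openssl x509 \
      -in /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'
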
	I0914 22:46:41.292497   46412 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:41.292668   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:41.292748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.295661   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296010   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.296042   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296246   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.296469   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296652   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296836   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.297031   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.297522   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.297556   46412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:41.609375   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:41.609417   46412 machine.go:91] provisioned docker machine in 853.82264ms
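
The "%!s(MISSING)" in the provisioning command above is a Go format-verb artifact in the log output, not what actually ran on the guest. Judging from the content echoed back by the SSH command, the effective command was most likely:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
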
	I0914 22:46:41.609431   46412 start.go:300] post-start starting for "embed-certs-588699" (driver="kvm2")
	I0914 22:46:41.609444   46412 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:41.609472   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.609831   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:41.609890   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.613037   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613497   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.613525   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613662   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.613854   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.614023   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.614142   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.704618   46412 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:41.709759   46412 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:41.709787   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:41.709867   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:41.709991   46412 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:41.710127   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:41.721261   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:41.742359   46412 start.go:303] post-start completed in 132.913862ms
	I0914 22:46:41.742387   46412 fix.go:56] fixHost completed within 19.562130605s
	I0914 22:46:41.742418   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.745650   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.746172   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746369   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.746564   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746781   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746944   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.747138   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.747629   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.747648   46412 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:41.860006   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731601.811427748
	
	I0914 22:46:41.860030   46412 fix.go:206] guest clock: 1694731601.811427748
	I0914 22:46:41.860040   46412 fix.go:219] Guest: 2023-09-14 22:46:41.811427748 +0000 UTC Remote: 2023-09-14 22:46:41.742391633 +0000 UTC m=+142.955285980 (delta=69.036115ms)
	I0914 22:46:41.860091   46412 fix.go:190] guest clock delta is within tolerance: 69.036115ms
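
Same logging artifact as the crio.minikube command: the guest-clock probe that returned 1694731601.811427748 is, in effect,

    # epoch seconds plus nanoseconds, used for the host/guest clock-delta check above
    date +%s.%N
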
	I0914 22:46:41.860098   46412 start.go:83] releasing machines lock for "embed-certs-588699", held for 19.679882828s
	I0914 22:46:41.860131   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.860411   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.863136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863584   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.863618   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863721   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864206   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864398   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864477   46412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:41.864514   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.864639   46412 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:41.864666   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.867568   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.867976   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.868028   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868147   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868248   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868373   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868579   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.868691   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868833   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.868876   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.869026   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.980624   46412 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:41.986113   46412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:42.134956   46412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:42.141030   46412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:42.141101   46412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:42.158635   46412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:42.158660   46412 start.go:469] detecting cgroup driver to use...
	I0914 22:46:42.158722   46412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:42.173698   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:42.184948   46412 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:42.185007   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:42.196434   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:42.208320   46412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:42.326624   46412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:42.459498   46412 docker.go:212] disabling docker service ...
	I0914 22:46:42.459567   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:42.472479   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:42.486651   46412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:42.636161   46412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:42.739841   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:42.758562   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:42.779404   46412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:42.779472   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.787902   46412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:42.787954   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.799513   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.811428   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.823348   46412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:42.835569   46412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:42.842820   46412 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:42.842885   46412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:42.855225   46412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:42.863005   46412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:42.979756   46412 ssh_runner.go:195] Run: sudo systemctl restart crio
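
Condensed, the CRI-O reconfiguration the 46412 (embed-certs-588699) stream performs above is the following set of in-place edits to /etc/crio/crio.conf.d/02-crio.conf plus a runtime restart. Every command is taken from the log; only the grouping and comments are editorial.

    # pin the pause image and switch CRI-O to the cgroupfs cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # clear stale CNI state, enable bridge netfilter and forwarding, then restart the runtime
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio
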
	I0914 22:46:43.181316   46412 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:43.181384   46412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:43.191275   46412 start.go:537] Will wait 60s for crictl version
	I0914 22:46:43.191343   46412 ssh_runner.go:195] Run: which crictl
	I0914 22:46:43.196264   46412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:43.228498   46412 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:43.228589   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.281222   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.341816   46412 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:43.343277   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:43.346473   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.346835   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:43.346882   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.347084   46412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:43.351205   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:43.364085   46412 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:43.364156   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:43.400558   46412 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:43.400634   46412 ssh_runner.go:195] Run: which lz4
	I0914 22:46:43.404906   46412 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:43.409239   46412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:43.409277   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:41.885236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Start
	I0914 22:46:41.885399   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring networks are active...
	I0914 22:46:41.886125   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network default is active
	I0914 22:46:41.886511   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network mk-old-k8s-version-930717 is active
	I0914 22:46:41.886855   46713 main.go:141] libmachine: (old-k8s-version-930717) Getting domain xml...
	I0914 22:46:41.887524   46713 main.go:141] libmachine: (old-k8s-version-930717) Creating domain...
	I0914 22:46:43.317748   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting to get IP...
	I0914 22:46:43.318757   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.319197   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.319288   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.319176   47160 retry.go:31] will retry after 287.487011ms: waiting for machine to come up
	I0914 22:46:43.608890   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.609712   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.609738   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.609656   47160 retry.go:31] will retry after 289.187771ms: waiting for machine to come up
	I0914 22:46:43.900234   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.900655   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.900679   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.900576   47160 retry.go:31] will retry after 433.007483ms: waiting for machine to come up
	I0914 22:46:44.335318   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.335775   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.335804   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.335727   47160 retry.go:31] will retry after 383.295397ms: waiting for machine to come up
	I0914 22:46:44.720415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.720967   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.721001   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.720856   47160 retry.go:31] will retry after 698.454643ms: waiting for machine to come up
	I0914 22:46:45.420833   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:45.421349   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:45.421391   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:45.421297   47160 retry.go:31] will retry after 938.590433ms: waiting for machine to come up
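
While old-k8s-version-930717 restarts, libmachine is simply polling libvirt for a DHCP lease, which is what the repeated "unable to find current IP address" lines are. When reproducing a hang like this interactively, the same information is available directly from libvirt with stock virsh commands (not part of the test run):

    # is the domain actually running, and has its NIC picked up a lease yet?
    virsh -c qemu:///system domstate old-k8s-version-930717
    virsh -c qemu:///system net-dhcp-leases mk-old-k8s-version-930717
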
	I0914 22:46:42.842954   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.867206   45954 api_server.go:72] duration metric: took 2.554352134s to wait for apiserver process to appear ...
	I0914 22:46:42.867238   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:46:42.867257   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.755748   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:46:46.755780   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:46:46.755832   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.873209   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:46.873243   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
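
The 403 ("system:anonymous" forbidden) followed by 500s above typically just reflects the restarted apiserver still working through its post-start hooks; every "[-]" line is a hook that has not yet completed. The same probe can be reproduced by hand with curl; this is an illustrative equivalent, not the code path the test binary uses:

    # poll the health endpoint until it returns 200 (-k: minikube's self-signed cert)
    until curl -ks -o /dev/null -w '%{http_code}\n' https://192.168.50.175:8444/healthz | grep -q '^200$'; do
      sleep 0.5
    done

    # append ?verbose for the same per-hook breakdown shown in the log
    curl -ks 'https://192.168.50.175:8444/healthz?verbose'
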
	I0914 22:46:47.373637   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.391311   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.391349   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.873646   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.880286   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.880323   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:48.373423   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:48.389682   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:46:48.415694   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:46:48.415727   45954 api_server.go:131] duration metric: took 5.548481711s to wait for apiserver health ...
	I0914 22:46:48.415739   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.415748   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.417375   45954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:46:45.238555   46412 crio.go:444] Took 1.833681 seconds to copy over tarball
	I0914 22:46:45.238634   46412 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:48.251155   46412 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012492519s)
	I0914 22:46:48.251176   46412 crio.go:451] Took 3.012596 seconds to extract the tarball
	I0914 22:46:48.251184   46412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:48.290336   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:48.338277   46412 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:48.338302   46412 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:48.338378   46412 ssh_runner.go:195] Run: crio config
	I0914 22:46:48.402542   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.402564   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.402583   46412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:48.402604   46412 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.205 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-588699 NodeName:embed-certs-588699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:48.402791   46412 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-588699"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:48.402883   46412 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-588699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:46:48.402958   46412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:48.414406   46412 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:48.414484   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:48.426437   46412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 22:46:48.445351   46412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:48.463696   46412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0914 22:46:48.481887   46412 ssh_runner.go:195] Run: grep 192.168.61.205	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:48.485825   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:48.500182   46412 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699 for IP: 192.168.61.205
	I0914 22:46:48.500215   46412 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:48.500362   46412 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:48.500417   46412 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:48.500514   46412 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/client.key
	I0914 22:46:48.500600   46412 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key.8dac69f7
	I0914 22:46:48.500726   46412 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key
	I0914 22:46:48.500885   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:48.500926   46412 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:48.500942   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:48.500976   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:48.501008   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:48.501039   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:48.501096   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:48.501918   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:48.528790   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:46:48.558557   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:48.583664   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:48.608274   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:48.631638   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:48.655163   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:48.677452   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:48.700443   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:48.724547   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:48.751559   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:48.778910   46412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:48.794369   46412 ssh_runner.go:195] Run: openssl version
	I0914 22:46:48.799778   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:48.809263   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814790   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814848   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.820454   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:48.829942   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:46.361228   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:46.361816   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:46.361846   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:46.361795   47160 retry.go:31] will retry after 1.00738994s: waiting for machine to come up
	I0914 22:46:47.370525   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:47.370964   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:47.370991   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:47.370921   47160 retry.go:31] will retry after 1.441474351s: waiting for machine to come up
	I0914 22:46:48.813921   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:48.814415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:48.814447   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:48.814362   47160 retry.go:31] will retry after 1.497562998s: waiting for machine to come up
	I0914 22:46:50.313674   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:50.314191   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:50.314221   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:50.314137   47160 retry.go:31] will retry after 1.620308161s: waiting for machine to come up
	I0914 22:46:48.418825   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:46:48.456715   45954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:46:48.496982   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:46:48.515172   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:46:48.515209   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:46:48.515223   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:46:48.515234   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:46:48.515247   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:46:48.515261   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:46:48.515272   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:46:48.515285   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:46:48.515295   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:46:48.515307   45954 system_pods.go:74] duration metric: took 18.305048ms to wait for pod list to return data ...
	I0914 22:46:48.515320   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:46:48.518842   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:46:48.518875   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:46:48.518888   45954 node_conditions.go:105] duration metric: took 3.562448ms to run NodePressure ...
	I0914 22:46:48.518908   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:50.951051   45954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.432118027s)
	I0914 22:46:50.951087   45954 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959708   45954 kubeadm.go:787] kubelet initialised
	I0914 22:46:50.959735   45954 kubeadm.go:788] duration metric: took 8.637125ms waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959745   45954 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:50.966214   45954 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.975076   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975106   45954 pod_ready.go:81] duration metric: took 8.863218ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.975118   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975129   45954 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.982438   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982471   45954 pod_ready.go:81] duration metric: took 7.330437ms waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.982485   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982493   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.991067   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991102   45954 pod_ready.go:81] duration metric: took 8.574268ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.991115   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991125   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.006696   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006732   45954 pod_ready.go:81] duration metric: took 15.595604ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.006745   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006755   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.354645   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354678   45954 pod_ready.go:81] duration metric: took 347.913938ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.354690   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354702   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.754959   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.754998   45954 pod_ready.go:81] duration metric: took 400.283619ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.755012   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.755022   45954 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:52.156253   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156299   45954 pod_ready.go:81] duration metric: took 401.260791ms waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:52.156314   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156327   45954 pod_ready.go:38] duration metric: took 1.196571114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:52.156352   45954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:46:52.169026   45954 ops.go:34] apiserver oom_adj: -16
	I0914 22:46:52.169049   45954 kubeadm.go:640] restartCluster took 23.325317121s
	I0914 22:46:52.169059   45954 kubeadm.go:406] StartCluster complete in 23.364799998s
	I0914 22:46:52.169079   45954 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.169161   45954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:46:52.171787   45954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.172077   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:46:52.172229   45954 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:46:52.172310   45954 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172332   45954 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-799144"
	I0914 22:46:52.172325   45954 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799144"
	W0914 22:46:52.172340   45954 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:46:52.172347   45954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799144"
	I0914 22:46:52.172351   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:52.172394   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.172394   45954 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172424   45954 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.172436   45954 addons.go:240] addon metrics-server should already be in state true
	I0914 22:46:52.172500   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.173205   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173252   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173383   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173451   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173744   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173822   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.178174   45954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-799144" context rescaled to 1 replicas
	I0914 22:46:52.178208   45954 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:46:52.180577   45954 out.go:177] * Verifying Kubernetes components...
	I0914 22:46:52.182015   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:46:52.194030   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0914 22:46:52.194040   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I0914 22:46:52.194506   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.194767   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.195059   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195078   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195219   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195235   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195420   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.195642   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.195715   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.196346   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.196392   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.198560   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0914 22:46:52.199130   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.199612   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.199641   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.199995   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.200530   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.200575   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.206536   45954 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.206558   45954 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:46:52.206584   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.206941   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.206973   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.215857   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0914 22:46:52.216266   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.216801   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.216825   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.217297   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.217484   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.220211   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I0914 22:46:52.220740   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.221296   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.221314   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.221798   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.221986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.222185   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.224162   45954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:46:52.224261   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.225483   45954 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.225494   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:46:52.225511   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.225526   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0914 22:46:52.227067   45954 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:46:52.225976   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.228337   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:46:52.228354   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:46:52.228373   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.228750   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.228764   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.228959   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229601   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.229674   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.229702   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229908   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.230068   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.230171   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.230203   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.230280   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.230503   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.232673   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233097   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.233153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.233536   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.233684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.233821   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.251500   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43473
	I0914 22:46:52.252069   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.252702   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.252722   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.253171   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.253419   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.255233   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.255574   45954 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.255591   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:46:52.255609   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.258620   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.259178   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259379   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.259584   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.259754   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.259961   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.350515   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.367291   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:46:52.367309   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:46:52.413141   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:46:52.413170   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:46:52.419647   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.462672   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:52.462698   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:46:52.519331   45954 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:46:52.519330   45954 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:52.530851   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:53.719523   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368967292s)
	I0914 22:46:53.719575   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719582   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299890259s)
	I0914 22:46:53.719616   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719638   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.719589   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720079   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720083   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720097   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720101   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720107   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720111   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720121   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720080   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720404   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720414   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720425   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720501   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720525   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720538   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720553   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720804   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720822   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.721724   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.190817165s)
	I0914 22:46:53.721771   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.721784   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.722084   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.722100   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.722089   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.722115   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.722128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.723592   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.723602   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.723614   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.723631   45954 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-799144"
	I0914 22:46:53.725666   45954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:46:48.840421   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.179960   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.180026   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.185490   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:49.194744   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:49.205937   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210532   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210582   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.215917   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:49.225393   46412 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:49.229604   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:49.234795   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:49.239907   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:49.245153   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:49.250558   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:49.256142   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:49.261518   46412 kubeadm.go:404] StartCluster: {Name:embed-certs-588699 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:49.261618   46412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:49.261687   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:49.291460   46412 cri.go:89] found id: ""
	I0914 22:46:49.291560   46412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:49.300496   46412 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:49.300558   46412 kubeadm.go:636] restartCluster start
	I0914 22:46:49.300616   46412 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:49.309827   46412 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.311012   46412 kubeconfig.go:92] found "embed-certs-588699" server: "https://192.168.61.205:8443"
	I0914 22:46:49.313336   46412 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:49.321470   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.321528   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.332257   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.332275   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.332320   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.345427   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.846146   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.846240   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.859038   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.345492   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.345583   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.358070   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.845544   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.845605   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.861143   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.345602   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.345675   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.357406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.845964   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.846082   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.860079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.346093   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.346159   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.360952   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.845612   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.845717   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.860504   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:53.345991   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.360947   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.936297   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:51.936809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:51.936840   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:51.936747   47160 retry.go:31] will retry after 2.284330296s: waiting for machine to come up
	I0914 22:46:54.222960   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:54.223478   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:54.223530   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:54.223417   47160 retry.go:31] will retry after 3.537695113s: waiting for machine to come up
	I0914 22:46:53.726984   45954 addons.go:502] enable addons completed in 1.554762762s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:46:54.641725   45954 node_ready.go:58] node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:57.141217   45954 node_ready.go:49] node "default-k8s-diff-port-799144" has status "Ready":"True"
	I0914 22:46:57.141240   45954 node_ready.go:38] duration metric: took 4.621872993s waiting for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:57.141250   45954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:57.151019   45954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162159   45954 pod_ready.go:92] pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace has status "Ready":"True"
	I0914 22:46:57.162180   45954 pod_ready.go:81] duration metric: took 11.133949ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162189   45954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:53.845734   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.845815   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.858406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.346078   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.346138   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.360079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.845738   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.845801   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.861945   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.346533   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.346627   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.360445   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.845577   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.845681   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.856800   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.346374   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.346461   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.357724   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.846264   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.846376   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.857963   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.346006   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.357336   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.845877   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.845944   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.857310   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:58.345855   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.345925   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.357766   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.762315   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:57.762689   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:57.762714   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:57.762651   47160 retry.go:31] will retry after 3.773493672s: waiting for machine to come up
	I0914 22:46:59.185077   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:01.185320   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:02.912525   45407 start.go:369] acquired machines lock for "no-preload-344363" in 55.358672707s
	I0914 22:47:02.912580   45407 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:47:02.912592   45407 fix.go:54] fixHost starting: 
	I0914 22:47:02.913002   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:47:02.913035   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:47:02.932998   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0914 22:47:02.933535   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:47:02.933956   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:47:02.933977   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:47:02.934303   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:47:02.934484   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:02.934627   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:47:02.936412   45407 fix.go:102] recreateIfNeeded on no-preload-344363: state=Stopped err=<nil>
	I0914 22:47:02.936438   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	W0914 22:47:02.936601   45407 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:47:02.938235   45407 out.go:177] * Restarting existing kvm2 VM for "no-preload-344363" ...
	I0914 22:46:58.845728   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.845806   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.859436   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:59.322167   46412 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:59.322206   46412 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:59.322218   46412 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:59.322278   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:59.352268   46412 cri.go:89] found id: ""
	I0914 22:46:59.352371   46412 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:59.366742   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:59.374537   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:59.374598   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382227   46412 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382251   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:59.486171   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.268311   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.462362   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.528925   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.601616   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:00.601697   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:00.623311   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.140972   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.640574   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.141044   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.640374   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.140881   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.166662   46412 api_server.go:72] duration metric: took 2.565044214s to wait for apiserver process to appear ...
	I0914 22:47:03.166688   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:03.166703   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:01.540578   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541058   46713 main.go:141] libmachine: (old-k8s-version-930717) Found IP for machine: 192.168.72.70
	I0914 22:47:01.541095   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has current primary IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541106   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserving static IP address...
	I0914 22:47:01.541552   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserved static IP address: 192.168.72.70
	I0914 22:47:01.541579   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting for SSH to be available...
	I0914 22:47:01.541613   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.541646   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | skip adding static IP to network mk-old-k8s-version-930717 - found existing host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"}
	I0914 22:47:01.541672   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Getting to WaitForSSH function...
	I0914 22:47:01.543898   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544285   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.544317   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544428   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH client type: external
	I0914 22:47:01.544451   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa (-rw-------)
	I0914 22:47:01.544499   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:01.544518   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | About to run SSH command:
	I0914 22:47:01.544552   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | exit 0
	I0914 22:47:01.639336   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:01.639694   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetConfigRaw
	I0914 22:47:01.640324   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.642979   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643345   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.643389   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643643   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:47:01.643833   46713 machine.go:88] provisioning docker machine ...
	I0914 22:47:01.643855   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:01.644085   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644249   46713 buildroot.go:166] provisioning hostname "old-k8s-version-930717"
	I0914 22:47:01.644272   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644434   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.646429   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.646771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.646819   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.647008   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.647209   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647360   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647536   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.647737   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.648245   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.648270   46713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-930717 && echo "old-k8s-version-930717" | sudo tee /etc/hostname
	I0914 22:47:01.789438   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-930717
	
	I0914 22:47:01.789472   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.792828   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793229   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.793277   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793459   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.793644   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793778   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793953   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.794120   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.794459   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.794478   46713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-930717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-930717/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-930717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:01.928496   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:01.928536   46713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:01.928567   46713 buildroot.go:174] setting up certificates
	I0914 22:47:01.928586   46713 provision.go:83] configureAuth start
	I0914 22:47:01.928609   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.928914   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.931976   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932368   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.932398   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932542   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.934939   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935311   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.935344   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935480   46713 provision.go:138] copyHostCerts
	I0914 22:47:01.935537   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:01.935548   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:01.935620   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:01.935775   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:01.935789   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:01.935824   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:01.935970   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:01.935981   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:01.936010   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:01.936086   46713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-930717 san=[192.168.72.70 192.168.72.70 localhost 127.0.0.1 minikube old-k8s-version-930717]
	I0914 22:47:02.167446   46713 provision.go:172] copyRemoteCerts
	I0914 22:47:02.167510   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:02.167534   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.170442   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.170862   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.170900   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.171089   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.171302   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.171496   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.171645   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.267051   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:02.289098   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 22:47:02.312189   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:02.334319   46713 provision.go:86] duration metric: configureAuth took 405.716896ms
	I0914 22:47:02.334346   46713 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:02.334555   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:47:02.334638   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.337255   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337605   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.337637   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.337949   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338100   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338240   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.338384   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.338859   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.338890   46713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:02.654307   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:02.654332   46713 machine.go:91] provisioned docker machine in 1.010485195s
	I0914 22:47:02.654345   46713 start.go:300] post-start starting for "old-k8s-version-930717" (driver="kvm2")
	I0914 22:47:02.654358   46713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:02.654382   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.654747   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:02.654782   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.657773   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658153   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.658182   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658425   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.658630   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.658812   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.659001   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.750387   46713 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:02.754444   46713 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:02.754468   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:02.754545   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:02.754654   46713 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:02.754762   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:02.765781   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:02.788047   46713 start.go:303] post-start completed in 133.686385ms
	I0914 22:47:02.788072   46713 fix.go:56] fixHost completed within 20.927830884s
	I0914 22:47:02.788098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.791051   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791408   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.791441   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791628   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.791840   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792041   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792215   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.792383   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.792817   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.792836   46713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:02.912359   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731622.856601606
	
	I0914 22:47:02.912381   46713 fix.go:206] guest clock: 1694731622.856601606
	I0914 22:47:02.912391   46713 fix.go:219] Guest: 2023-09-14 22:47:02.856601606 +0000 UTC Remote: 2023-09-14 22:47:02.788077838 +0000 UTC m=+102.306332554 (delta=68.523768ms)
	I0914 22:47:02.912413   46713 fix.go:190] guest clock delta is within tolerance: 68.523768ms
	I0914 22:47:02.912424   46713 start.go:83] releasing machines lock for "old-k8s-version-930717", held for 21.052207532s
	I0914 22:47:02.912457   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.912730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:02.915769   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916200   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.916265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916453   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917073   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917245   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917352   46713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:02.917397   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.917535   46713 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:02.917563   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.920256   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920363   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920656   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920695   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920724   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920744   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920959   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921261   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921282   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921431   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921489   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921567   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.921635   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:03.014070   46713 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:03.047877   46713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:03.192347   46713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:03.200249   46713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:03.200324   46713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:03.215110   46713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:03.215138   46713 start.go:469] detecting cgroup driver to use...
	I0914 22:47:03.215201   46713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:03.228736   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:03.241326   46713 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:03.241377   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:03.253001   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:03.264573   46713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:03.371107   46713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:03.512481   46713 docker.go:212] disabling docker service ...
	I0914 22:47:03.512554   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:03.526054   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:03.537583   46713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:03.662087   46713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:03.793448   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:03.807574   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:03.828240   46713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0914 22:47:03.828311   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.842435   46713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:03.842490   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.856199   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.867448   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.878222   46713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:03.891806   46713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:03.899686   46713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:03.899740   46713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:03.912584   46713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:03.920771   46713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:04.040861   46713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:47:04.230077   46713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:04.230147   46713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:04.235664   46713 start.go:537] Will wait 60s for crictl version
	I0914 22:47:04.235726   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:04.239737   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:04.279680   46713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:04.279755   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.329363   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.389025   46713 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0914 22:47:02.939505   45407 main.go:141] libmachine: (no-preload-344363) Calling .Start
	I0914 22:47:02.939701   45407 main.go:141] libmachine: (no-preload-344363) Ensuring networks are active...
	I0914 22:47:02.940415   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network default is active
	I0914 22:47:02.940832   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network mk-no-preload-344363 is active
	I0914 22:47:02.941287   45407 main.go:141] libmachine: (no-preload-344363) Getting domain xml...
	I0914 22:47:02.942103   45407 main.go:141] libmachine: (no-preload-344363) Creating domain...
	I0914 22:47:04.410207   45407 main.go:141] libmachine: (no-preload-344363) Waiting to get IP...
	I0914 22:47:04.411192   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.411669   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.411744   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.411647   47373 retry.go:31] will retry after 198.435142ms: waiting for machine to come up
	I0914 22:47:04.612435   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.612957   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.613025   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.612934   47373 retry.go:31] will retry after 350.950211ms: waiting for machine to come up
	I0914 22:47:04.965570   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.966332   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.966458   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.966377   47373 retry.go:31] will retry after 398.454996ms: waiting for machine to come up
	I0914 22:47:04.390295   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:04.393815   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394249   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:04.394282   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394543   46713 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:04.398850   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:04.411297   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:47:04.411363   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:04.443950   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:04.444023   46713 ssh_runner.go:195] Run: which lz4
	I0914 22:47:04.448422   46713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:47:04.453479   46713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:47:04.453505   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0914 22:47:03.686086   45954 pod_ready.go:92] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.686112   45954 pod_ready.go:81] duration metric: took 6.523915685s waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.686125   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692434   45954 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.692454   45954 pod_ready.go:81] duration metric: took 6.320818ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692466   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698065   45954 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.698088   45954 pod_ready.go:81] duration metric: took 5.613243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698100   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703688   45954 pod_ready.go:92] pod "kube-proxy-j2qmv" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.703706   45954 pod_ready.go:81] duration metric: took 5.599421ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703718   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708487   45954 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.708505   45954 pod_ready.go:81] duration metric: took 4.779322ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708516   45954 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:05.993620   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:07.475579   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.475617   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:07.475631   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:07.531335   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.531366   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:08.032057   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.039350   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.039384   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:08.531559   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.538857   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.538891   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:09.031899   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:09.037891   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:47:09.047398   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:47:09.047426   46412 api_server.go:131] duration metric: took 5.880732639s to wait for apiserver health ...
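The sequence above is a standard apiserver readiness probe: GET /healthz over HTTPS, treat 403 (RBAC bootstrap roles not created yet) and 500 (poststart hooks still failing) as "not ready", and stop only once the endpoint answers 200 with "ok". A rough Go sketch of such a polling loop (illustrative only, not minikube's actual api_server code); skipping certificate verification is assumed because the apiserver serves a self-signed cert during bootstrap:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 "ok" or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver is healthy
				}
				// 403/500 during startup just mean "not ready yet"; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.61.205:8443/healthz", 2*time.Minute))
	}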
	I0914 22:47:09.047434   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:47:09.047440   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:09.049137   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:05.366070   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.366812   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.366844   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.366740   47373 retry.go:31] will retry after 471.857141ms: waiting for machine to come up
	I0914 22:47:05.840519   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.841198   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.841229   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.841150   47373 retry.go:31] will retry after 632.189193ms: waiting for machine to come up
	I0914 22:47:06.475175   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:06.475769   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:06.475800   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:06.475704   47373 retry.go:31] will retry after 866.407813ms: waiting for machine to come up
	I0914 22:47:07.344343   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:07.344865   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:07.344897   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:07.344815   47373 retry.go:31] will retry after 1.101301607s: waiting for machine to come up
	I0914 22:47:08.448452   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:08.449070   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:08.449111   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:08.449014   47373 retry.go:31] will retry after 995.314765ms: waiting for machine to come up
	I0914 22:47:09.446294   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:09.446708   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:09.446740   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:09.446653   47373 retry.go:31] will retry after 1.180552008s: waiting for machine to come up
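The repeated "will retry after …" lines are a wait-for-IP loop: each probe of the libvirt DHCP leases either yields an address or schedules another attempt after a growing, slightly randomized delay. A rough sketch of that pattern (a generic retry helper, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor keeps calling probe until it succeeds or attempts run out,
	// sleeping a growing, jittered delay between tries.
	func waitFor(probe func() error, attempts int) error {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if err := probe(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v\n", delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2 // back off
		}
		return errors.New("gave up waiting")
	}

	func main() {
		_ = waitFor(func() error { return errors.New("no IP yet") }, 5)
	}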
	I0914 22:47:05.984485   46713 crio.go:444] Took 1.536109 seconds to copy over tarball
	I0914 22:47:05.984562   46713 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:47:09.247825   46713 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.263230608s)
	I0914 22:47:09.247858   46713 crio.go:451] Took 3.263345 seconds to extract the tarball
	I0914 22:47:09.247871   46713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:47:09.289821   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:09.340429   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:09.340463   46713 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:09.340544   46713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.340568   46713 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0914 22:47:09.340535   46713 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.340531   46713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.340789   46713 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.340811   46713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.340886   46713 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.340793   46713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.342655   46713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.342658   46713 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.342636   46713 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.342635   46713 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.342793   46713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.561063   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0914 22:47:09.564079   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.564246   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.564957   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.566014   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.571757   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.578469   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.687502   46713 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0914 22:47:09.687548   46713 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0914 22:47:09.687591   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.727036   46713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0914 22:47:09.727085   46713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.727140   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0914 22:47:09.737952   46713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0914 22:47:09.737986   46713 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.737990   46713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0914 22:47:09.738002   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738013   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738023   46713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.738063   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.744728   46713 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0914 22:47:09.744768   46713 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.744813   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753014   46713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0914 22:47:09.753055   46713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.753080   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753104   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.753056   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0914 22:47:09.753149   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.753193   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.753213   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.758372   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.758544   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.875271   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0914 22:47:09.875299   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0914 22:47:09.875357   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0914 22:47:09.875382   46713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.875404   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0914 22:47:09.876393   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0914 22:47:09.878339   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0914 22:47:09.878491   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0914 22:47:09.881457   46713 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0914 22:47:09.881475   46713 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.881521   46713 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
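Because the image tarball already exists under /var/lib/minikube/images, the copy is skipped and the cached archive is handed straight to podman, whose containers/storage backend is the same image store CRI-O reads from. A minimal Go sketch of that final step (illustrative wrapper, not minikube's cache_images code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage assumes the archive is already on the node and asks
	// podman to import it into containers/storage, where CRI-O can see it.
	func loadCachedImage(tarball string) error {
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}

	func main() {
		if err := loadCachedImage("/var/lib/minikube/images/pause_3.1"); err != nil {
			fmt.Println(err)
		}
	}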
	I0914 22:47:08.496805   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.993044   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:09.050966   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:09.061912   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:09.096783   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:09.111938   46412 system_pods.go:59] 8 kube-system pods found
	I0914 22:47:09.111976   46412 system_pods.go:61] "coredns-5dd5756b68-zrd8r" [5b5f18a0-d6ee-42f2-b31a-4f8555b50388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:09.111988   46412 system_pods.go:61] "etcd-embed-certs-588699" [b32d61b5-8c3f-4980-9f0f-c08630be9c36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:47:09.112001   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [58ac976e-7a8c-4aee-9ee5-b92bd7e897b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:47:09.112015   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [3f9587f5-fe32-446a-a4c9-cb679b177937] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:47:09.112036   46412 system_pods.go:61] "kube-proxy-l8pq9" [4aecae33-dcd9-4ec6-a537-ecbb076c44d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:47:09.112052   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [f23ab185-f4c2-4e39-936d-51d51538b0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:47:09.112066   46412 system_pods.go:61] "metrics-server-57f55c9bc5-zvk82" [3c48277c-4604-4a83-82ea-2776cf0d0537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:47:09.112077   46412 system_pods.go:61] "storage-provisioner" [f0acbbe1-c326-4863-ae2e-d2d3e5be07c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:47:09.112090   46412 system_pods.go:74] duration metric: took 15.280254ms to wait for pod list to return data ...
	I0914 22:47:09.112103   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:09.119686   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:09.119725   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:09.119747   46412 node_conditions.go:105] duration metric: took 7.637688ms to run NodePressure ...
	I0914 22:47:09.119768   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:09.407351   46412 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414338   46412 kubeadm.go:787] kubelet initialised
	I0914 22:47:09.414361   46412 kubeadm.go:788] duration metric: took 6.974234ms waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414369   46412 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:47:09.424482   46412 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:12.171133   46412 pod_ready.go:102] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.628919   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:10.629418   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:10.629449   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:10.629366   47373 retry.go:31] will retry after 1.486310454s: waiting for machine to come up
	I0914 22:47:12.117762   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:12.118350   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:12.118381   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:12.118295   47373 retry.go:31] will retry after 2.678402115s: waiting for machine to come up
	I0914 22:47:14.798599   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:14.799127   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:14.799160   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:14.799060   47373 retry.go:31] will retry after 2.724185493s: waiting for machine to come up
	I0914 22:47:10.647242   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:12.244764   46713 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.363213143s)
	I0914 22:47:12.244798   46713 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0914 22:47:12.244823   46713 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.013457524s)
	I0914 22:47:12.244888   46713 cache_images.go:92] LoadImages completed in 2.904411161s
	W0914 22:47:12.244978   46713 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0914 22:47:12.245070   46713 ssh_runner.go:195] Run: crio config
	I0914 22:47:12.328636   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:12.328663   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:12.328687   46713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:12.328710   46713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-930717 NodeName:old-k8s-version-930717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 22:47:12.328882   46713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-930717"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-930717
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:12.328984   46713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-930717 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:47:12.329062   46713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0914 22:47:12.339084   46713 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:12.339169   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:12.348354   46713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 22:47:12.369083   46713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:12.388242   46713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0914 22:47:12.407261   46713 ssh_runner.go:195] Run: grep 192.168.72.70	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:12.411055   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:12.425034   46713 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717 for IP: 192.168.72.70
	I0914 22:47:12.425070   46713 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:12.425236   46713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:12.425283   46713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:12.425372   46713 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.key
	I0914 22:47:12.425451   46713 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key.382dacf3
	I0914 22:47:12.425512   46713 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key
	I0914 22:47:12.425642   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:12.425671   46713 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:12.425685   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:12.425708   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:12.425732   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:12.425751   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:12.425789   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:12.426339   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:12.456306   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:47:12.486038   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:12.520941   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:47:12.552007   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:12.589620   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:12.619358   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:12.650395   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:12.678898   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:12.704668   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:12.730499   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:12.755286   46713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:12.773801   46713 ssh_runner.go:195] Run: openssl version
	I0914 22:47:12.781147   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:12.793953   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799864   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799922   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.806881   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:12.817936   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:12.830758   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836538   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836613   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.843368   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:12.855592   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:12.866207   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871317   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871368   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.878438   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:12.891012   46713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:12.895887   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:12.902284   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:12.909482   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:12.916524   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:12.924045   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:12.929935   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
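Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate will expire within the next 24 hours (86400 seconds); a non-zero exit would force regeneration before the control plane is restarted. The same check can be expressed natively in Go with crypto/x509, shown here as an illustrative alternative rather than what minikube actually runs:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside the given window (the equivalent of `openssl x509 -checkend`).
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		fmt.Println(expiring, err)
	}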
	I0914 22:47:12.937292   46713 kubeadm.go:404] StartCluster: {Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:12.937417   46713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:12.937470   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:12.975807   46713 cri.go:89] found id: ""
	I0914 22:47:12.975902   46713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:12.988356   46713 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:12.988379   46713 kubeadm.go:636] restartCluster start
	I0914 22:47:12.988434   46713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:13.000294   46713 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.001492   46713 kubeconfig.go:92] found "old-k8s-version-930717" server: "https://192.168.72.70:8443"
	I0914 22:47:13.008583   46713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
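The diff above compares the kubeadm config already on the node with the freshly rendered kubeadm.yaml.new; restartCluster apparently reuses the existing control-plane state when the two match and falls back to re-running init phases when they differ. A sketch of that decision, assuming a plain byte comparison is sufficient (hypothetical helper, not minikube's kubeadm package):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// configChanged reports whether the rendered kubeadm config differs from
	// the one already present on the node.
	func configChanged(existingPath, newPath string) (bool, error) {
		existing, err := os.ReadFile(existingPath)
		if err != nil {
			if os.IsNotExist(err) {
				return true, nil // nothing on disk yet: treat as changed
			}
			return false, err
		}
		rendered, err := os.ReadFile(newPath)
		if err != nil {
			return false, err
		}
		return !bytes.Equal(existing, rendered), nil
	}

	func main() {
		changed, err := configChanged("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(changed, err)
	}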
	I0914 22:47:13.023004   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.023065   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.037604   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.037625   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.037671   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.048939   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.549653   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.549746   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.561983   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.049481   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.049588   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.064694   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.549101   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.549195   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.564858   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:15.049112   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.049206   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.063428   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:12.993654   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:14.995358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:13.946979   46412 pod_ready.go:92] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:13.947004   46412 pod_ready.go:81] duration metric: took 4.522495708s waiting for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:13.947013   46412 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:15.968061   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:18.465595   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:17.526472   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:17.526915   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:17.526946   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:17.526867   47373 retry.go:31] will retry after 3.587907236s: waiting for machine to come up
	I0914 22:47:15.549179   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.549273   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.561977   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.049593   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.049678   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.063654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.549178   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.549248   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.561922   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.049041   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.049131   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.062442   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.550005   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.550066   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.561254   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.049855   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.049932   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.062226   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.549845   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.549941   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.561219   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.049739   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.049829   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.061225   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.550035   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.550112   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.561546   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:20.049979   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.050080   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.061478   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.489830   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:19.490802   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.490931   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.118871   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119369   45407 main.go:141] libmachine: (no-preload-344363) Found IP for machine: 192.168.39.60
	I0914 22:47:21.119391   45407 main.go:141] libmachine: (no-preload-344363) Reserving static IP address...
	I0914 22:47:21.119418   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has current primary IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119860   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.119888   45407 main.go:141] libmachine: (no-preload-344363) Reserved static IP address: 192.168.39.60
	I0914 22:47:21.119906   45407 main.go:141] libmachine: (no-preload-344363) DBG | skip adding static IP to network mk-no-preload-344363 - found existing host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"}
	I0914 22:47:21.119931   45407 main.go:141] libmachine: (no-preload-344363) DBG | Getting to WaitForSSH function...
	I0914 22:47:21.119949   45407 main.go:141] libmachine: (no-preload-344363) Waiting for SSH to be available...
	I0914 22:47:21.121965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122282   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.122312   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122392   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH client type: external
	I0914 22:47:21.122429   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa (-rw-------)
	I0914 22:47:21.122482   45407 main.go:141] libmachine: (no-preload-344363) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:21.122510   45407 main.go:141] libmachine: (no-preload-344363) DBG | About to run SSH command:
	I0914 22:47:21.122521   45407 main.go:141] libmachine: (no-preload-344363) DBG | exit 0
	I0914 22:47:21.206981   45407 main.go:141] libmachine: (no-preload-344363) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:21.207366   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetConfigRaw
	I0914 22:47:21.208066   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.210323   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210607   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.210639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210795   45407 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/config.json ...
	I0914 22:47:21.211016   45407 machine.go:88] provisioning docker machine ...
	I0914 22:47:21.211036   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:21.211258   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211431   45407 buildroot.go:166] provisioning hostname "no-preload-344363"
	I0914 22:47:21.211455   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211629   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.213574   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.213887   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.213921   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.214015   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.214181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214338   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.214648   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.215041   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.215056   45407 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-344363 && echo "no-preload-344363" | sudo tee /etc/hostname
	I0914 22:47:21.347323   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-344363
	
	I0914 22:47:21.347358   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.350445   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.350846   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.350882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.351144   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.351393   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351599   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351766   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.351944   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.352264   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.352291   45407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-344363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-344363/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-344363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:21.471619   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:21.471648   45407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:21.471671   45407 buildroot.go:174] setting up certificates
	I0914 22:47:21.471683   45407 provision.go:83] configureAuth start
	I0914 22:47:21.471696   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.472019   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.474639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475113   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.475141   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475293   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.477627   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.477976   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.478009   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.478148   45407 provision.go:138] copyHostCerts
	I0914 22:47:21.478189   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:21.478198   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:21.478249   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:21.478336   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:21.478344   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:21.478362   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:21.478416   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:21.478423   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:21.478439   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:21.478482   45407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.no-preload-344363 san=[192.168.39.60 192.168.39.60 localhost 127.0.0.1 minikube no-preload-344363]
	I0914 22:47:21.546956   45407 provision.go:172] copyRemoteCerts
	I0914 22:47:21.547006   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:21.547029   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.549773   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550217   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.550257   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550468   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.550683   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.550850   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.551050   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:21.635939   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:21.656944   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:47:21.679064   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:21.701127   45407 provision.go:86] duration metric: configureAuth took 229.434247ms
	I0914 22:47:21.701147   45407 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:21.701319   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:47:21.701381   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.704100   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704475   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.704512   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704672   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.704865   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705046   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705218   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.705382   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.705828   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.705849   45407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:22.037291   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:22.037337   45407 machine.go:91] provisioned docker machine in 826.295956ms
	I0914 22:47:22.037350   45407 start.go:300] post-start starting for "no-preload-344363" (driver="kvm2")
	I0914 22:47:22.037363   45407 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:22.037396   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.037704   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:22.037729   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.040372   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040729   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.040757   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040896   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.041082   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.041266   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.041373   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.129612   45407 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:22.133522   45407 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:22.133550   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:22.133625   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:22.133715   45407 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:22.133844   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:22.142411   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:22.165470   45407 start.go:303] post-start completed in 128.106418ms
	I0914 22:47:22.165496   45407 fix.go:56] fixHost completed within 19.252903923s
	I0914 22:47:22.165524   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.168403   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168696   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.168731   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168894   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.169095   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169248   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169384   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.169571   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:22.169891   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:22.169904   45407 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:22.284038   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731642.258576336
	
	I0914 22:47:22.284062   45407 fix.go:206] guest clock: 1694731642.258576336
	I0914 22:47:22.284071   45407 fix.go:219] Guest: 2023-09-14 22:47:22.258576336 +0000 UTC Remote: 2023-09-14 22:47:22.16550191 +0000 UTC m=+357.203571663 (delta=93.074426ms)
	I0914 22:47:22.284107   45407 fix.go:190] guest clock delta is within tolerance: 93.074426ms
	I0914 22:47:22.284117   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 19.371563772s
	I0914 22:47:22.284146   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.284388   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:22.286809   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287091   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.287133   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287288   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287782   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287978   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.288050   45407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:22.288085   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.288176   45407 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:22.288197   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.290608   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.290936   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.290965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291067   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291157   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291345   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291516   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.291529   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.291554   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291649   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.291706   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291837   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291975   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.292158   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.417570   45407 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:22.423145   45407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:22.563752   45407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:22.569625   45407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:22.569718   45407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:22.585504   45407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:22.585527   45407 start.go:469] detecting cgroup driver to use...
	I0914 22:47:22.585610   45407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:22.599600   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:22.612039   45407 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:22.612080   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:22.624817   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:22.637141   45407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:22.744181   45407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:22.864420   45407 docker.go:212] disabling docker service ...
	I0914 22:47:22.864490   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:22.877360   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:22.888786   45407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:23.000914   45407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:23.137575   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:23.150682   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:23.167898   45407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:47:23.167966   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.176916   45407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:23.176991   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.185751   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.195260   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.204852   45407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:23.214303   45407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:23.222654   45407 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:23.222717   45407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:23.235654   45407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:23.244081   45407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:23.357943   45407 ssh_runner.go:195] Run: sudo systemctl restart crio
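
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup) and then restarts crio. The same "replace the whole key line" rewriting can be sketched in Go as a stand-in for those sed calls; this is an illustration only, assuming one `key = value` pair per line and a local copy of the drop-in file:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfValue rewrites every `key = ...` line in a crio drop-in config to
    // the given value, mimicking `sed -i 's|^.*key = .*$|key = "value"|'`.
    func setConfValue(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, updated, 0o644)
    }

    func main() {
        // Hypothetical local copy; the real path is /etc/crio/crio.conf.d/02-crio.conf.
        path := "02-crio.conf"
        if err := setConfValue(path, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
            fmt.Println("update failed:", err)
            return
        }
        if err := setConfValue(path, "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Println("update failed:", err)
            return
        }
        fmt.Println("config updated; a real provisioner would now restart crio")
    }
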
	I0914 22:47:23.521315   45407 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:23.521410   45407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:23.526834   45407 start.go:537] Will wait 60s for crictl version
	I0914 22:47:23.526889   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:23.530250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:23.562270   45407 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:23.562358   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.606666   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.658460   45407 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:47:20.467600   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:20.964310   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.964331   46412 pod_ready.go:81] duration metric: took 7.017312906s waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.964349   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968539   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.968555   46412 pod_ready.go:81] duration metric: took 4.200242ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968563   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973180   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.973194   46412 pod_ready.go:81] duration metric: took 4.625123ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973206   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977403   46412 pod_ready.go:92] pod "kube-proxy-l8pq9" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.977418   46412 pod_ready.go:81] duration metric: took 4.206831ms waiting for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977425   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375236   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:22.375259   46412 pod_ready.go:81] duration metric: took 1.397826525s waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375271   46412 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:23.659885   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:23.662745   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663195   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:23.663228   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663452   45407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:23.667637   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:23.678881   45407 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:47:23.678929   45407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:23.708267   45407 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:47:23.708309   45407 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:23.708390   45407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.708421   45407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.708424   45407 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0914 22:47:23.708437   45407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.708425   45407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.708537   45407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.708403   45407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.708393   45407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.709903   45407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.709887   45407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.709899   45407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.710189   45407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.710260   45407 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0914 22:47:23.710346   45407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.917134   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.929080   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.929396   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0914 22:47:23.935684   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.936236   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.937239   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.937622   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.006429   45407 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0914 22:47:24.006479   45407 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.006524   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.102547   45407 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0914 22:47:24.102597   45407 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.102641   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201012   45407 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0914 22:47:24.201050   45407 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.201100   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201106   45407 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0914 22:47:24.201138   45407 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.201156   45407 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0914 22:47:24.201203   45407 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.201227   45407 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0914 22:47:24.201282   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.201294   45407 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.201329   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201236   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201180   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.206295   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.263389   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0914 22:47:24.263451   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.263501   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.263513   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:24.263534   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0914 22:47:24.263573   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.263665   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.273844   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0914 22:47:24.273932   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:24.338823   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0914 22:47:24.338944   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:24.344560   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0914 22:47:24.344580   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0914 22:47:24.344594   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344635   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344659   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:24.344678   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0914 22:47:24.344723   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0914 22:47:24.344745   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0914 22:47:24.344816   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:24.346975   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0914 22:47:24.953835   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:20.549479   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.549585   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.563121   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.049732   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.049807   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.061447   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.549012   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.549073   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.561653   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.049517   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.049582   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.062280   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.549943   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.550017   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.562654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:23.024019   46713 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:23.024043   46713 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:23.024054   46713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:23.024101   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:23.060059   46713 cri.go:89] found id: ""
	I0914 22:47:23.060116   46713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:23.078480   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:23.087665   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:23.087714   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096513   46713 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096535   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:23.205072   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.081881   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.285041   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.364758   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.468127   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:24.468201   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:24.483354   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.007133   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.507231   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:23.992945   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.492600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:24.475872   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.978889   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.317110   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.97244294s)
	I0914 22:47:26.317145   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0914 22:47:26.317167   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317174   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.972489589s)
	I0914 22:47:26.317202   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0914 22:47:26.317215   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317248   45407 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.363386448s)
	I0914 22:47:26.317281   45407 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 22:47:26.317319   45407 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.317366   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:26.317213   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (1.972376756s)
	I0914 22:47:26.317426   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0914 22:47:28.397989   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (2.080744487s)
	I0914 22:47:28.398021   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0914 22:47:28.398031   45407 ssh_runner.go:235] Completed: which crictl: (2.080647539s)
	I0914 22:47:28.398048   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398093   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398095   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.006554   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:26.032232   46713 api_server.go:72] duration metric: took 1.564104415s to wait for apiserver process to appear ...
	I0914 22:47:26.032255   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:26.032270   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:28.992292   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.490442   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.033000   46713 api_server.go:269] stopped: https://192.168.72.70:8443/healthz: Get "https://192.168.72.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 22:47:31.033044   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:31.568908   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:31.568937   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:32.069915   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.080424   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.080456   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:32.570110   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.580879   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.580918   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:33.069247   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:33.077664   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:47:33.086933   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:47:33.086960   46713 api_server.go:131] duration metric: took 7.054699415s to wait for apiserver health ...
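
The healthz wait logged above boils down to repeatedly issuing a GET against the apiserver's /healthz endpoint and treating the 403 ("system:anonymous", before RBAC bootstrap completes) and 500 ("healthz check failed") responses as not-ready until a plain 200 "ok" comes back. A minimal stand-alone sketch of that probe in Go, reusing the endpoint from the log and skipping TLS verification purely for illustration (not how the real health check authenticates), could look like:

// healthz_probe.go - illustrative sketch only, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; InsecureSkipVerify is a
	// simplification for this sketch only.
	url := "https://192.168.72.70:8443/healthz"
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 0; attempt < 20; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver is restarting
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy: plain 200 "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

Run against the sequence above, such a probe would print the same 403 and 500 bodies before stopping on the final 200.
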
	I0914 22:47:33.086973   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:33.086981   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:33.088794   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:29.476304   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.975459   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:30.974281   45407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.57612291s)
	I0914 22:47:30.974347   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 22:47:30.974381   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.576263058s)
	I0914 22:47:30.974403   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0914 22:47:30.974427   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:30.974455   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:30.974470   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:33.737309   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.762815322s)
	I0914 22:47:33.737355   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0914 22:47:33.737379   45407 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.737322   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.762844826s)
	I0914 22:47:33.737464   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 22:47:33.737436   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
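
The "skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)" decision above compares the cached image file on the host against the copy already on the VM (the stat -c "%s %y" size/mtime check) and only transfers it when they differ. A rough local sketch of that decision, with hypothetical paths and without the SSH round-trip minikube actually performs:

// needscopy.go - illustrative sketch of the copy-skip check, not ssh_runner.go.
package main

import (
	"fmt"
	"os"
)

// needsCopy reports whether dst is missing or differs from src in size or mtime.
func needsCopy(src, dst string) (bool, error) {
	s, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	d, err := os.Stat(dst)
	if os.IsNotExist(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	return s.Size() != d.Size() || !s.ModTime().Equal(d.ModTime()), nil
}

func main() {
	// Hypothetical local paths standing in for the cache file and the VM copy.
	need, err := needsCopy("storage-provisioner_v5.cache", "/var/lib/minikube/images/storage-provisioner_v5")
	if err != nil {
		fmt.Println("stat error:", err)
		return
	}
	fmt.Println("copy needed:", need)
}
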
	I0914 22:47:33.090357   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:33.103371   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:33.123072   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:33.133238   46713 system_pods.go:59] 7 kube-system pods found
	I0914 22:47:33.133268   46713 system_pods.go:61] "coredns-5644d7b6d9-8sbjk" [638464d2-96db-460d-bf82-0ee79df816da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:33.133278   46713 system_pods.go:61] "etcd-old-k8s-version-930717" [4b38f48a-fc4a-43d5-a2b4-414aff712c1b] Running
	I0914 22:47:33.133286   46713 system_pods.go:61] "kube-apiserver-old-k8s-version-930717" [523a3adc-8c68-4980-8a53-133476ce2488] Running
	I0914 22:47:33.133294   46713 system_pods.go:61] "kube-controller-manager-old-k8s-version-930717" [36fd7e01-4a5d-446f-8370-f7a7e886571c] Running
	I0914 22:47:33.133306   46713 system_pods.go:61] "kube-proxy-l4qz4" [c61d0471-0a9e-4662-b723-39944c8b3c31] Running
	I0914 22:47:33.133314   46713 system_pods.go:61] "kube-scheduler-old-k8s-version-930717" [f6d45807-c7f2-4545-b732-45dbd945c660] Running
	I0914 22:47:33.133323   46713 system_pods.go:61] "storage-provisioner" [2956bea1-80f8-4f61-a635-4332d4e3042e] Running
	I0914 22:47:33.133331   46713 system_pods.go:74] duration metric: took 10.233824ms to wait for pod list to return data ...
	I0914 22:47:33.133343   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:33.137733   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:33.137765   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:33.137776   46713 node_conditions.go:105] duration metric: took 4.42667ms to run NodePressure ...
	I0914 22:47:33.137795   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:33.590921   46713 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:33.597720   46713 retry.go:31] will retry after 159.399424ms: kubelet not initialised
	I0914 22:47:33.767747   46713 retry.go:31] will retry after 191.717885ms: kubelet not initialised
	I0914 22:47:33.967120   46713 retry.go:31] will retry after 382.121852ms: kubelet not initialised
	I0914 22:47:34.354106   46713 retry.go:31] will retry after 1.055800568s: kubelet not initialised
	I0914 22:47:35.413704   46713 retry.go:31] will retry after 1.341728619s: kubelet not initialised
	I0914 22:47:33.993188   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.491280   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:34.475254   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.977175   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.760804   46713 retry.go:31] will retry after 2.668611083s: kubelet not initialised
	I0914 22:47:39.434688   46713 retry.go:31] will retry after 2.1019007s: kubelet not initialised
	I0914 22:47:38.994051   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.490913   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:38.998980   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.474686   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:40.530763   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.793268381s)
	I0914 22:47:40.530793   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0914 22:47:40.530820   45407 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:40.530881   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:41.888277   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.357355595s)
	I0914 22:47:41.888305   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0914 22:47:41.888338   45407 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:41.888405   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:42.537191   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 22:47:42.537244   45407 cache_images.go:123] Successfully loaded all cached images
	I0914 22:47:42.537251   45407 cache_images.go:92] LoadImages completed in 18.828927203s
	I0914 22:47:42.537344   45407 ssh_runner.go:195] Run: crio config
	I0914 22:47:42.594035   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:47:42.594056   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:42.594075   45407 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:42.594098   45407 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-344363 NodeName:no-preload-344363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:47:42.594272   45407 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-344363"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:42.594383   45407 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-344363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
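
The kubeadm and kubelet configuration dumped above is generated per profile from the options on the kubeadm.go:176 line (advertise address, node name, pod subnet, Kubernetes version, and so on). As an illustration only, not minikube's actual template, rendering such a cluster-specific fragment from a small parameter struct might look like:

// kubeadm_template.go - sketch of templating a kubeadm config fragment.
// Field names and the template text are illustrative assumptions.
package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	// Values taken from the log above for the no-preload profile.
	p := clusterParams{
		AdvertiseAddress:  "192.168.39.60",
		BindPort:          8443,
		NodeName:          "no-preload-344363",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.28.1",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
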
	I0914 22:47:42.594449   45407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:47:42.604172   45407 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:42.604243   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:42.612570   45407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0914 22:47:42.628203   45407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:42.643625   45407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0914 22:47:42.658843   45407 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:42.661922   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:42.672252   45407 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363 for IP: 192.168.39.60
	I0914 22:47:42.672279   45407 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:42.672420   45407 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:42.672462   45407 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:42.672536   45407 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.key
	I0914 22:47:42.672630   45407 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key.a014e791
	I0914 22:47:42.672693   45407 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key
	I0914 22:47:42.672828   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:42.672867   45407 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:42.672879   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:42.672915   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:42.672948   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:42.672982   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:42.673044   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:42.673593   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:42.695080   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:47:42.716844   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:42.746475   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0914 22:47:42.769289   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:42.790650   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:42.811665   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:42.833241   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:42.853851   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:42.875270   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:42.896913   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:42.917370   45407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:42.934549   45407 ssh_runner.go:195] Run: openssl version
	I0914 22:47:42.939762   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:42.949829   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954155   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954204   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.959317   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:42.968463   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:42.979023   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983436   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983502   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.988655   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:42.998288   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:43.007767   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011865   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011940   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.016837   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:43.026372   45407 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:43.030622   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:43.036026   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:43.041394   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:43.046608   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:43.051675   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:43.056621   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
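
Each openssl x509 ... -checkend 86400 call above asks whether the given certificate will still be valid 86400 seconds (24 hours) from now. An equivalent check written directly against crypto/x509, using one of the certificate paths from the log, could be sketched as:

// certcheck.go - illustrative equivalent of "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	// Fail (non-zero exit) if the certificate expires within the next 24h,
	// mirroring -checkend 86400.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until:", cert.NotAfter)
}
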
	I0914 22:47:43.061552   45407 kubeadm.go:404] StartCluster: {Name:no-preload-344363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:43.061645   45407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:43.061700   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:43.090894   45407 cri.go:89] found id: ""
	I0914 22:47:43.090957   45407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:43.100715   45407 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:43.100732   45407 kubeadm.go:636] restartCluster start
	I0914 22:47:43.100782   45407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:43.109233   45407 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.110217   45407 kubeconfig.go:92] found "no-preload-344363" server: "https://192.168.39.60:8443"
	I0914 22:47:43.112442   45407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:43.120580   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.120619   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.131224   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.131238   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.131292   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.140990   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.641661   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.641753   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.653379   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.142002   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.142077   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.154194   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.641806   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.641931   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.653795   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:41.541334   46713 retry.go:31] will retry after 2.553142131s: kubelet not initialised
	I0914 22:47:44.100647   46713 retry.go:31] will retry after 6.538244211s: kubelet not initialised
	I0914 22:47:43.995757   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.490438   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:43.974300   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.474137   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:45.141728   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.141816   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.153503   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:45.641693   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.641775   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.653204   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.141748   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.141838   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.153035   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.641294   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.641386   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.653144   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.141813   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.141915   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.152408   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.641793   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.641872   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.653228   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.141212   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.141304   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.152568   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.641805   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.641881   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.652184   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.141839   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.141909   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.152921   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.642082   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.642160   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.656837   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.991209   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:51.492672   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:48.973567   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.974964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:52.975525   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.141324   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.141399   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.153003   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:50.642032   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.642113   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.653830   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.141403   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.141486   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.152324   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.641932   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.642027   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.653279   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.141928   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.141998   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.152653   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.641151   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.641239   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.652312   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:53.121389   45407 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:53.121422   45407 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:53.121436   45407 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:53.121511   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:53.150615   45407 cri.go:89] found id: ""
	I0914 22:47:53.150681   45407 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:53.164511   45407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:53.173713   45407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:53.173778   45407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183776   45407 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183797   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:53.310974   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.230246   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.409237   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.474183   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.572433   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:54.572581   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:54.584938   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:50.644922   46713 retry.go:31] will retry after 11.248631638s: kubelet not initialised
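
The retry.go lines interleaved above ("will retry after 159.399424ms", ..., "will retry after 11.248631638s") show the wait for kubelet initialisation being polled with a growing, jittered delay between attempts. A generic sketch of that retry-with-backoff pattern, with a stand-in condition rather than minikube's actual kubelet check:

// retrybackoff.go - illustrative retry-with-backoff loop; the condition below
// is a placeholder, not the real "kubelet not initialised" probe.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(check func() error, attempts int, base time.Duration) error {
	wait := base
	for i := 0; i < attempts; i++ {
		err := check()
		if err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		// Add up to ~50% jitter, then roughly double the interval each attempt.
		jitter := time.Duration(rand.Int63n(int64(wait)/2 + 1))
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return errors.New("condition never became true")
}

func main() {
	start := time.Now()
	err := retryWithBackoff(func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 10, 200*time.Millisecond)
	fmt.Println("result:", err)
}
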
	I0914 22:47:53.990630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.990661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.475037   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:57.475941   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.098638   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:55.599218   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.099188   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.598826   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.621701   45407 api_server.go:72] duration metric: took 2.049267478s to wait for apiserver process to appear ...
	I0914 22:47:56.621729   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:56.621749   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622263   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:56.622301   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622682   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:57.123404   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.433050   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:48:00.433082   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:48:00.433096   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.467030   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.467073   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:00.623319   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.633882   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.633912   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.123559   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.128661   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.128691   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.623201   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.629775   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.629804   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:02.123439   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:02.131052   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:48:02.141185   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:48:02.141213   45407 api_server.go:131] duration metric: took 5.519473898s to wait for apiserver health ...
	I0914 22:48:02.141222   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:48:02.141228   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:48:02.143254   45407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:57.992038   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:59.992600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:02.144756   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:48:02.158230   45407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:48:02.182382   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:48:02.204733   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:48:02.204786   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:48:02.204801   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:48:02.204817   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:48:02.204834   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:48:02.204847   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:48:02.204859   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:48:02.204876   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:48:02.204887   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:48:02.204900   45407 system_pods.go:74] duration metric: took 22.491699ms to wait for pod list to return data ...
	I0914 22:48:02.204913   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:48:02.208661   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:48:02.208692   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:48:02.208706   45407 node_conditions.go:105] duration metric: took 3.7844ms to run NodePressure ...
	I0914 22:48:02.208731   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:48:02.454257   45407 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458848   45407 kubeadm.go:787] kubelet initialised
	I0914 22:48:02.458868   45407 kubeadm.go:788] duration metric: took 4.585034ms waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458874   45407 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:02.464634   45407 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.471350   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471371   45407 pod_ready.go:81] duration metric: took 6.714087ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.471379   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471387   45407 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.476977   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.476998   45407 pod_ready.go:81] duration metric: took 5.604627ms waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.477009   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.477019   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.483218   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483236   45407 pod_ready.go:81] duration metric: took 6.211697ms waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.483244   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483256   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.589184   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589217   45407 pod_ready.go:81] duration metric: took 105.950074ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.589227   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589236   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.987051   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987081   45407 pod_ready.go:81] duration metric: took 397.836385ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.987094   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987103   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.392835   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392865   45407 pod_ready.go:81] duration metric: took 405.754351ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.392876   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392886   45407 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.786615   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786641   45407 pod_ready.go:81] duration metric: took 393.746366ms waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.786652   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786660   45407 pod_ready.go:38] duration metric: took 1.327778716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:03.786676   45407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:48:03.798081   45407 ops.go:34] apiserver oom_adj: -16
	I0914 22:48:03.798101   45407 kubeadm.go:640] restartCluster took 20.697363165s
	I0914 22:48:03.798107   45407 kubeadm.go:406] StartCluster complete in 20.736562339s
	I0914 22:48:03.798121   45407 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.798193   45407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:48:03.799954   45407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.800200   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:48:03.800299   45407 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:48:03.800368   45407 addons.go:69] Setting storage-provisioner=true in profile "no-preload-344363"
	I0914 22:48:03.800449   45407 addons.go:231] Setting addon storage-provisioner=true in "no-preload-344363"
	W0914 22:48:03.800462   45407 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:48:03.800511   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800394   45407 addons.go:69] Setting metrics-server=true in profile "no-preload-344363"
	I0914 22:48:03.800543   45407 addons.go:231] Setting addon metrics-server=true in "no-preload-344363"
	W0914 22:48:03.800558   45407 addons.go:240] addon metrics-server should already be in state true
	I0914 22:48:03.800590   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800388   45407 addons.go:69] Setting default-storageclass=true in profile "no-preload-344363"
	I0914 22:48:03.800633   45407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-344363"
	I0914 22:48:03.800411   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:48:03.800906   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800909   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800944   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.801011   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.801054   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.800968   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.804911   45407 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-344363" context rescaled to 1 replicas
	I0914 22:48:03.804946   45407 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:48:03.807503   45407 out.go:177] * Verifying Kubernetes components...
	I0914 22:47:59.973913   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:01.974625   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:03.808768   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:48:03.816774   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0914 22:48:03.816773   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0914 22:48:03.817265   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817518   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817791   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.817821   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818011   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.818032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818223   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818407   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818431   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.818976   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.819027   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.829592   45407 addons.go:231] Setting addon default-storageclass=true in "no-preload-344363"
	W0914 22:48:03.829614   45407 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:48:03.829641   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.830013   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.830047   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.835514   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0914 22:48:03.835935   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.836447   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.836473   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.836841   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.837011   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.838909   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.843677   45407 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:48:03.845231   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:48:03.845246   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:48:03.845261   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.844291   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0914 22:48:03.845685   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.846224   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.846242   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.846572   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.847073   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.847103   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.847332   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0914 22:48:03.848400   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.848666   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849160   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.849182   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.849263   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.849283   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849314   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.849461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.849570   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.849635   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.849682   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.850555   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.850585   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.863035   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0914 22:48:03.863559   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864010   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.864204   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0914 22:48:03.864478   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.864526   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864752   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.864936   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864955   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.865261   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.865489   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.866474   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.868300   45407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:48:03.867504   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.869841   45407 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:03.869855   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:48:03.869874   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.870067   45407 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:03.870078   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:48:03.870091   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.873462   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.873859   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.873882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874026   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874114   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.874287   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.874397   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.874903   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874949   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.874980   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.875135   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.875301   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.875486   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.956934   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:48:03.956956   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:48:03.973872   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:48:03.973896   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:48:04.002028   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.002051   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:48:04.018279   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:04.037990   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:04.047125   45407 node_ready.go:35] waiting up to 6m0s for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:04.047292   45407 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:48:04.086299   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.991926   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.991952   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992225   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992292   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992324   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992342   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992364   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992614   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992634   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992649   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992657   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992665   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992914   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992933   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:01.898769   46713 retry.go:31] will retry after 9.475485234s: kubelet not initialised
	I0914 22:48:05.528027   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.490009157s)
	I0914 22:48:05.528078   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528087   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528435   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528457   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528470   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528436   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.528481   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528802   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528824   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528829   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.600274   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.51392997s)
	I0914 22:48:05.600338   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600351   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.600645   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.600670   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.600682   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600695   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.602502   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.602513   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.602524   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.602546   45407 addons.go:467] Verifying addon metrics-server=true in "no-preload-344363"
	I0914 22:48:05.604330   45407 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 22:48:02.491577   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.995014   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.474529   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:06.474964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:05.605648   45407 addons.go:502] enable addons completed in 1.805353931s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0914 22:48:06.198114   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:08.199023   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:07.490770   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:09.991693   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:08.974469   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:11.474711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:10.698198   45407 node_ready.go:49] node "no-preload-344363" has status "Ready":"True"
	I0914 22:48:10.698218   45407 node_ready.go:38] duration metric: took 6.651066752s waiting for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:10.698227   45407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:10.704694   45407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710103   45407 pod_ready.go:92] pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:10.710119   45407 pod_ready.go:81] duration metric: took 5.400404ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710128   45407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.747445   45407 pod_ready.go:102] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.229927   45407 pod_ready.go:92] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:13.229953   45407 pod_ready.go:81] duration metric: took 2.519818297s waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:13.229966   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747126   45407 pod_ready.go:92] pod "kube-apiserver-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.747147   45407 pod_ready.go:81] duration metric: took 1.51717338s waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747157   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752397   45407 pod_ready.go:92] pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.752413   45407 pod_ready.go:81] duration metric: took 5.250049ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752420   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.380752   46713 kubeadm.go:787] kubelet initialised
	I0914 22:48:11.380783   46713 kubeadm.go:788] duration metric: took 37.789831498s waiting for restarted kubelet to initialise ...
	I0914 22:48:11.380793   46713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:11.386189   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392948   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.392970   46713 pod_ready.go:81] duration metric: took 6.75113ms waiting for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392981   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398606   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.398627   46713 pod_ready.go:81] duration metric: took 5.638835ms waiting for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398639   46713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404145   46713 pod_ready.go:92] pod "etcd-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.404174   46713 pod_ready.go:81] duration metric: took 5.527173ms waiting for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404187   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409428   46713 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.409448   46713 pod_ready.go:81] duration metric: took 5.252278ms waiting for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409461   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779225   46713 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.779252   46713 pod_ready.go:81] duration metric: took 369.782336ms waiting for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779267   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179256   46713 pod_ready.go:92] pod "kube-proxy-l4qz4" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.179277   46713 pod_ready.go:81] duration metric: took 400.003039ms waiting for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179286   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578889   46713 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.578921   46713 pod_ready.go:81] duration metric: took 399.627203ms waiting for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578935   46713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:12.491274   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:14.991146   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.991799   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.974725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.473917   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.474722   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:15.099588   45407 pod_ready.go:92] pod "kube-proxy-zzkbp" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.099612   45407 pod_ready.go:81] duration metric: took 347.18498ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.099623   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498642   45407 pod_ready.go:92] pod "kube-scheduler-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.498664   45407 pod_ready.go:81] duration metric: took 399.034277ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498678   45407 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:17.806138   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.887157   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:19.390361   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.991911   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.993133   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.474578   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.305450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:22.305521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:24.306131   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:21.885143   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.886722   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.490126   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.991185   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.974547   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.473850   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.805651   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.306125   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.384992   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.385266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.385877   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:27.991827   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.991995   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.475603   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.974568   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:31.806483   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.306121   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.886341   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.385506   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.488948   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.490950   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.989621   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.474815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.973407   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.806806   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.806988   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.886043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.386865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.991151   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:41.491384   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:39.974109   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.473010   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.808362   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.305126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.886094   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.386710   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.991121   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.992500   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:44.475120   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:46.973837   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.305212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.305740   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.806334   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.886380   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.887578   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:48.490416   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:50.990196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.474209   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.474657   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.808853   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.305742   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.888488   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.385591   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:52.990333   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.991549   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:53.974301   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:55.976250   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.474372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.807759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.304597   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.885164   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.885809   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:57.491267   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.492043   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.991231   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:00.974064   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:02.975136   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.808275   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.385492   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.385865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:05.386266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.992513   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.490253   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:04.975537   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.473413   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.306066   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.805711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.886495   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.386100   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.995545   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.490960   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:09.476367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.974480   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.807870   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.306759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:12.386166   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.990090   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.489864   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.975102   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.474761   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.475314   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:15.809041   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.305700   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:17.385490   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:19.386201   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.490727   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.493813   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.973383   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.973978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.306906   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.805781   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.806417   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:21.387171   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:23.394663   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.989981   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.998602   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.975048   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.473804   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.805993   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:25.886256   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:28.385307   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:30.386473   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.490860   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.991665   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.992373   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.475815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.973092   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.305648   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.806797   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.886577   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.386203   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.490086   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:36.490465   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:33.973662   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.974041   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.473275   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.306848   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.806295   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.388154   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.886447   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.490850   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.989734   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.473543   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.473711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:41.807197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.305572   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.385788   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.386844   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.995794   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:45.490630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.474251   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.974425   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.306070   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.805530   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.886095   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.888504   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:47.491269   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.990921   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.474354   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.973552   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:50.806526   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.807021   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.385411   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.385825   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.490166   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:54.991982   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.974372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:56.473350   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.305863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.306450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.308315   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.886560   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.886950   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.386043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.490604   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.490811   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.993715   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:58.973152   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.975078   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.474589   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.806409   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.806552   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:02.387458   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.886066   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.490551   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:06.490632   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.974290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.974714   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.810256   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.305443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.386252   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:09.887808   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.490994   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.990417   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.474207   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.973759   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.305662   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.807626   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.385387   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.386055   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.991196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.489856   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.974362   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.474890   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.305348   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.306521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.306661   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:16.386682   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:18.386805   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.491969   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.990884   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.991904   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.476052   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.973290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.806863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.810113   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:20.886118   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.388653   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:24.490861   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.991437   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.474556   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.307894   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.809126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:25.885409   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:27.886080   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.386151   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:29.489358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.491041   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.973725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.975342   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.474590   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.306171   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.307126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:32.386190   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:34.886414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.491383   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.492155   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.974978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:38.473506   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.307221   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.806174   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.386235   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.886579   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.990447   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.991649   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.474117   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.973778   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.308130   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.806411   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.807765   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.385199   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.387102   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.491019   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.993076   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.974689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.473863   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.305509   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.305825   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:46.885280   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.385189   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.491661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.989457   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.991512   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.973709   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.976112   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.306459   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.805441   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.386498   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.887424   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.492074   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.989668   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.473073   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.473689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.474597   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:55.806711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.305434   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.386640   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.885298   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.995348   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:01.491262   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.974371   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.474367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.305803   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.806120   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:04.807184   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.886357   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.887274   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:05.386976   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.708637   45954 pod_ready.go:81] duration metric: took 4m0.000105295s waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:03.708672   45954 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:03.708681   45954 pod_ready.go:38] duration metric: took 4m6.567418041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
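
The four-minute wait that just expired is the pod_ready loop visible throughout the lines above: the pod is fetched repeatedly and its Ready condition checked until the deadline passes. Below is a minimal client-go sketch of that kind of check; the kubeconfig path, the 2-second poll interval, and the target pod are placeholders for illustration, not minikube's actual implementation.

// Sketch only: poll a pod's Ready condition until a deadline, as the
// pod_ready log lines above do. Names and paths are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready:"True"
				}
			}
		}
		time.Sleep(2 * time.Second) // the log above polls at a similar cadence
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-hfgp8", 4*time.Minute))
}
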
	I0914 22:51:03.708699   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:51:03.708739   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:03.708804   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:03.759664   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:03.759688   45954 cri.go:89] found id: ""
	I0914 22:51:03.759697   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:03.759753   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.764736   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:03.764789   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:03.800251   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:03.800280   45954 cri.go:89] found id: ""
	I0914 22:51:03.800290   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:03.800341   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.804761   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:03.804818   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:03.847136   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:03.847162   45954 cri.go:89] found id: ""
	I0914 22:51:03.847172   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:03.847215   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.851253   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:03.851325   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:03.882629   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:03.882654   45954 cri.go:89] found id: ""
	I0914 22:51:03.882664   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:03.882713   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.887586   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:03.887642   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:03.916702   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:03.916723   45954 cri.go:89] found id: ""
	I0914 22:51:03.916730   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:03.916773   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.921172   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:03.921232   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:03.950593   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:03.950618   45954 cri.go:89] found id: ""
	I0914 22:51:03.950628   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:03.950689   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.954303   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:03.954366   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:03.982565   45954 cri.go:89] found id: ""
	I0914 22:51:03.982588   45954 logs.go:284] 0 containers: []
	W0914 22:51:03.982597   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:03.982604   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:03.982662   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:04.011932   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.011957   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:04.011964   45954 cri.go:89] found id: ""
	I0914 22:51:04.011972   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:04.012026   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.016091   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.019830   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:04.019852   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:04.061469   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:04.061494   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:04.092823   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:04.092846   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:04.156150   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:04.156190   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:04.169879   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:04.169920   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:04.226165   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:04.226198   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.255658   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:04.255692   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:04.299368   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:04.299401   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:04.440433   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:04.440467   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:04.477396   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:04.477425   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:04.513399   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:04.513431   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:05.016889   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:05.016925   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:05.067712   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:05.067749   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
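
The container lookups and log dumps above follow one pattern per component: crictl ps -a --quiet --name=<component> to find the container ID, then crictl logs --tail 400 <id> to collect its output. A rough Go sketch of those two steps, run directly on the node, is shown below; sudo access and crictl being on PATH are assumptions, and this is not the ssh_runner code itself.

// Sketch only: reproduce the crictl lookup + log-tail pattern from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose name matches, as printed
// by `crictl ps -a --quiet --name=<name>`.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs collects the last n log lines of a container via `crictl logs --tail n <id>`.
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	ids, err := containerIDs("etcd")
	if err != nil || len(ids) == 0 {
		fmt.Println("no etcd container found:", err)
		return
	}
	logs, _ := tailLogs(ids[0], 400)
	fmt.Println(logs)
}
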
	I0914 22:51:05.973423   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.973637   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.307754   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.805419   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.389465   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.885150   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.597529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:51:07.614053   45954 api_server.go:72] duration metric: took 4m15.435815174s to wait for apiserver process to appear ...
	I0914 22:51:07.614076   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:51:07.614106   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:07.614155   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:07.643309   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:07.643333   45954 cri.go:89] found id: ""
	I0914 22:51:07.643342   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:07.643411   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.647434   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:07.647511   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:07.676943   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:07.676959   45954 cri.go:89] found id: ""
	I0914 22:51:07.676966   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:07.677006   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.681053   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:07.681101   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:07.714710   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:07.714736   45954 cri.go:89] found id: ""
	I0914 22:51:07.714745   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:07.714807   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.718900   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:07.718966   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:07.754786   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:07.754808   45954 cri.go:89] found id: ""
	I0914 22:51:07.754815   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:07.754867   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.759623   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:07.759693   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:07.794366   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:07.794389   45954 cri.go:89] found id: ""
	I0914 22:51:07.794398   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:07.794457   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.798717   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:07.798777   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:07.831131   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:07.831158   45954 cri.go:89] found id: ""
	I0914 22:51:07.831167   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:07.831227   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.835696   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:07.835762   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:07.865802   45954 cri.go:89] found id: ""
	I0914 22:51:07.865831   45954 logs.go:284] 0 containers: []
	W0914 22:51:07.865841   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:07.865849   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:07.865905   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:07.895025   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:07.895049   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:07.895056   45954 cri.go:89] found id: ""
	I0914 22:51:07.895064   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:07.895118   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.899230   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.903731   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:07.903751   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:08.033922   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:08.033952   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:08.068784   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:08.068812   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:08.120395   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:08.120428   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:08.133740   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:08.133763   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:08.173288   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:08.173320   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:08.203964   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:08.203988   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:08.732327   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:08.732367   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:08.784110   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:08.784150   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:08.819179   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:08.819210   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:08.866612   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:08.866644   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:08.900892   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:08.900939   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:08.950563   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:08.950593   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:11.505428   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:51:11.511228   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:51:11.512855   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:51:11.512881   45954 api_server.go:131] duration metric: took 3.898798182s to wait for apiserver health ...
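
The healthz wait that just completed polls the apiserver's /healthz endpoint until it answers 200 with body "ok", as the two lines above show. A bare-bones sketch of that probe follows; the endpoint URL is the one logged for this run, while skipping TLS verification is a shortcut for the sketch only, not necessarily how minikube performs the check.

// Sketch only: probe the apiserver healthz endpoint seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch shortcut
	}
	resp, err := client.Get("https://192.168.50.175:8444/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok" once the apiserver is healthy
}
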
	I0914 22:51:11.512891   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:51:11.512911   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:11.512954   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:11.544538   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:11.544563   45954 cri.go:89] found id: ""
	I0914 22:51:11.544573   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:11.544629   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.548878   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:11.548946   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:11.578439   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:11.578464   45954 cri.go:89] found id: ""
	I0914 22:51:11.578473   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:11.578531   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.582515   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:11.582576   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:11.611837   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:11.611857   45954 cri.go:89] found id: ""
	I0914 22:51:11.611863   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:11.611917   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.615685   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:11.615744   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:11.645850   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:11.645869   45954 cri.go:89] found id: ""
	I0914 22:51:11.645876   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:11.645914   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.649995   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:11.650048   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:11.683515   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:11.683541   45954 cri.go:89] found id: ""
	I0914 22:51:11.683550   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:11.683604   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.687715   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:11.687806   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:11.721411   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.721428   45954 cri.go:89] found id: ""
	I0914 22:51:11.721434   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:11.721477   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.725801   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:11.725859   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:11.760391   45954 cri.go:89] found id: ""
	I0914 22:51:11.760417   45954 logs.go:284] 0 containers: []
	W0914 22:51:11.760427   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:11.760437   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:11.760498   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:11.792140   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.792162   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:11.792168   45954 cri.go:89] found id: ""
	I0914 22:51:11.792175   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:11.792234   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.796600   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.800888   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:11.800912   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:11.863075   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:11.863106   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:11.877744   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:11.877775   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.930381   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:11.930418   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.961471   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:11.961497   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:12.005391   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:12.005417   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:12.034742   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:12.034771   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:12.064672   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:12.064702   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:12.095801   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:12.095834   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:12.124224   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:12.124260   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:09.974433   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.975389   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.806380   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.807443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:12.657331   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:12.657375   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:12.718197   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:12.718227   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:12.845353   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:12.845381   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:15.395502   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:51:15.395524   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.395529   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.395534   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.395540   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.395544   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.395548   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.395554   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.395559   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.395565   45954 system_pods.go:74] duration metric: took 3.882669085s to wait for pod list to return data ...
	I0914 22:51:15.395572   45954 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:51:15.398128   45954 default_sa.go:45] found service account: "default"
	I0914 22:51:15.398148   45954 default_sa.go:55] duration metric: took 2.571314ms for default service account to be created ...
	I0914 22:51:15.398155   45954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:51:15.407495   45954 system_pods.go:86] 8 kube-system pods found
	I0914 22:51:15.407517   45954 system_pods.go:89] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.407522   45954 system_pods.go:89] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.407527   45954 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.407532   45954 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.407535   45954 system_pods.go:89] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.407540   45954 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.407549   45954 system_pods.go:89] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.407558   45954 system_pods.go:89] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.407576   45954 system_pods.go:126] duration metric: took 9.409452ms to wait for k8s-apps to be running ...
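
The k8s-apps sweep above lists the kube-system pods and their phases; in this run only metrics-server is still Pending. A client-go sketch that performs the same listing and flags non-Running pods is shown below; the kubeconfig path is a placeholder and the acceptance criteria minikube applies to the result are not reproduced here.

// Sketch only: list kube-system pods and report any that are not Running,
// mirroring the system_pods sweep in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase) // e.g. metrics-server stuck Pending
		}
	}
}
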
	I0914 22:51:15.407587   45954 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:51:15.407633   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:15.424728   45954 system_svc.go:56] duration metric: took 17.122868ms WaitForService to wait for kubelet.
	I0914 22:51:15.424754   45954 kubeadm.go:581] duration metric: took 4m23.246518879s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:51:15.424794   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:51:15.428492   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:51:15.428520   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:51:15.428534   45954 node_conditions.go:105] duration metric: took 3.733977ms to run NodePressure ...
	I0914 22:51:15.428550   45954 start.go:228] waiting for startup goroutines ...
	I0914 22:51:15.428563   45954 start.go:233] waiting for cluster config update ...
	I0914 22:51:15.428576   45954 start.go:242] writing updated cluster config ...
	I0914 22:51:15.428887   45954 ssh_runner.go:195] Run: rm -f paused
	I0914 22:51:15.479711   45954 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:51:15.482387   45954 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799144" cluster and "default" namespace by default
	I0914 22:51:11.885968   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.887391   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:14.474188   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.974056   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.306146   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.806037   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.386306   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.386406   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:19.474446   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:21.474860   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.375841   46412 pod_ready.go:81] duration metric: took 4m0.000552226s waiting for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:22.375872   46412 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:22.375890   46412 pod_ready.go:38] duration metric: took 4m12.961510371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:22.375915   46412 kubeadm.go:640] restartCluster took 4m33.075347594s
	W0914 22:51:22.375983   46412 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:51:22.376022   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:51:20.806249   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.807141   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:24.809235   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:20.888098   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:23.386482   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:25.386542   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.305114   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:29.306240   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.886695   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:30.385740   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:31.306508   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:33.306655   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:32.886111   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.384925   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.805992   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:38.307801   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:37.385344   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:39.888303   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:40.806212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:43.305815   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:42.388414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:44.388718   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:45.306197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:47.806983   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:49.807150   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:46.885737   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:48.885794   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.115476   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.73941793s)
	I0914 22:51:53.115549   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:53.128821   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:51:53.137267   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:51:53.145533   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:51:53.145569   46412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 22:51:53.202279   46412 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 22:51:53.202501   46412 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:51:53.353512   46412 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:51:53.353674   46412 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:51:53.353816   46412 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:51:53.513428   46412 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:51:53.515450   46412 out.go:204]   - Generating certificates and keys ...
	I0914 22:51:53.515574   46412 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:51:53.515672   46412 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:51:53.515785   46412 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:51:53.515896   46412 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:51:53.516234   46412 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:51:53.516841   46412 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:51:53.517488   46412 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:51:53.517974   46412 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:51:53.518563   46412 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:51:53.519109   46412 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:51:53.519728   46412 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:51:53.519809   46412 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:51:53.641517   46412 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:51:53.842920   46412 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:51:53.982500   46412 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:51:54.065181   46412 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:51:54.065678   46412 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:51:54.071437   46412 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:51:52.305643   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.305996   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:51.386246   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.386956   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.073206   46412 out.go:204]   - Booting up control plane ...
	I0914 22:51:54.073363   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:51:54.073470   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:51:54.073554   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:51:54.091178   46412 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:51:54.091289   46412 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:51:54.091371   46412 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:51:54.221867   46412 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:51:56.306473   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:58.306953   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:55.886624   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:57.887222   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:00.385756   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.225144   46412 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002879 seconds
	I0914 22:52:02.225314   46412 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:02.244705   46412 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:02.778808   46412 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:02.779047   46412 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-588699 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:52:03.296381   46412 kubeadm.go:322] [bootstrap-token] Using token: x2l9oo.p0a5g5jx49srzji3
	I0914 22:52:03.297976   46412 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:03.298091   46412 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:03.308475   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:52:03.319954   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:03.325968   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:03.330375   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:03.334686   46412 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:03.353185   46412 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:52:03.622326   46412 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:03.721359   46412 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:03.721385   46412 kubeadm.go:322] 
	I0914 22:52:03.721472   46412 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:03.721486   46412 kubeadm.go:322] 
	I0914 22:52:03.721589   46412 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:03.721602   46412 kubeadm.go:322] 
	I0914 22:52:03.721623   46412 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:03.721678   46412 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:03.721727   46412 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:03.721764   46412 kubeadm.go:322] 
	I0914 22:52:03.721856   46412 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 22:52:03.721867   46412 kubeadm.go:322] 
	I0914 22:52:03.721945   46412 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:52:03.721954   46412 kubeadm.go:322] 
	I0914 22:52:03.722029   46412 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:03.722137   46412 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:03.722240   46412 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:03.722250   46412 kubeadm.go:322] 
	I0914 22:52:03.722367   46412 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:52:03.722468   46412 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:03.722479   46412 kubeadm.go:322] 
	I0914 22:52:03.722583   46412 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.722694   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:03.722719   46412 kubeadm.go:322] 	--control-plane 
	I0914 22:52:03.722752   46412 kubeadm.go:322] 
	I0914 22:52:03.722887   46412 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:03.722912   46412 kubeadm.go:322] 
	I0914 22:52:03.723031   46412 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.723170   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:03.724837   46412 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:03.724867   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:52:03.724879   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:03.726645   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:03.728115   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:03.741055   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
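	(The 457-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist is not reproduced in this log; it is the bridge-plugin-plus-host-local-IPAM conflist that minikube's bridge CNI manager generates. If needed it can be read back off the node after the run, e.g.:

	    minikube -p embed-certs-588699 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist

	The profile name is taken from the surrounding log lines; the exact file contents are an assumption, not captured in this report.)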
	I0914 22:52:03.811746   46412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:03.811823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=embed-certs-588699 minikube.k8s.io/updated_at=2023_09_14T22_52_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:03.811827   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:00.805633   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.805831   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.807503   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.885499   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.886940   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.097721   46412 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:04.097763   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.186240   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.773886   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.273494   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.773993   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.274084   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.773309   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.273666   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.773916   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.274226   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.774073   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.807538   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.306062   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:06.886980   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.385212   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.274041   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:09.773409   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.274272   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.774321   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.274268   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.774250   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.273823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.774000   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.273596   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.774284   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.806015   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:14.308997   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:11.386087   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:12.580003   46713 pod_ready.go:81] duration metric: took 4m0.001053291s waiting for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:12.580035   46713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:12.580062   46713 pod_ready.go:38] duration metric: took 4m1.199260232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:12.580089   46713 kubeadm.go:640] restartCluster took 4m59.591702038s
	W0914 22:52:12.580145   46713 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:52:12.580169   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:52:14.274174   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:14.773472   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.273376   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.773286   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.273920   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.773334   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.926033   46412 kubeadm.go:1081] duration metric: took 13.114277677s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:16.926076   46412 kubeadm.go:406] StartCluster complete in 5m27.664586228s
	I0914 22:52:16.926099   46412 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.926229   46412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:16.928891   46412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.929177   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:16.929313   46412 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:16.929393   46412 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-588699"
	I0914 22:52:16.929408   46412 addons.go:69] Setting default-storageclass=true in profile "embed-certs-588699"
	I0914 22:52:16.929423   46412 addons.go:69] Setting metrics-server=true in profile "embed-certs-588699"
	I0914 22:52:16.929435   46412 addons.go:231] Setting addon metrics-server=true in "embed-certs-588699"
	W0914 22:52:16.929446   46412 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:16.929446   46412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-588699"
	I0914 22:52:16.929475   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:52:16.929508   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929418   46412 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-588699"
	W0914 22:52:16.929533   46412 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:16.929574   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929907   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929938   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929939   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929963   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929968   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929995   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.948975   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I0914 22:52:16.948990   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0914 22:52:16.948977   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0914 22:52:16.949953   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950006   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.949957   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950601   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950607   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950620   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950626   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950632   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950647   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.951159   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951191   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951410   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951808   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951829   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.951896   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951906   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.951911   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.961182   46412 addons.go:231] Setting addon default-storageclass=true in "embed-certs-588699"
	W0914 22:52:16.961207   46412 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:16.961236   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.961615   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.961637   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.976517   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0914 22:52:16.976730   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0914 22:52:16.977005   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977161   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977448   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977466   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977564   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977589   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977781   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977913   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977966   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.978108   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.980084   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.980429   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.982113   46412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:16.983227   46412 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:16.984383   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:16.984394   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:16.984407   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.983307   46412 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:16.984439   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:16.984455   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.987850   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0914 22:52:16.987989   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988270   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.988771   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.988788   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.988849   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.988867   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988894   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.989058   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.989528   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.989748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.990151   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.990172   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.990441   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:16.990597   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.990766   46412 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-588699" context rescaled to 1 replicas
	I0914 22:52:16.990794   46412 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:16.992351   46412 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:16.990986   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.991129   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.994003   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:16.994015   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.994097   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.994300   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.994607   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.007652   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0914 22:52:17.008127   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:17.008676   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:17.008699   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:17.009115   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:17.009301   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:17.010905   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:17.011169   46412 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.011183   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:17.011201   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:17.014427   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.014837   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:17.014865   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.015132   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:17.015299   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:17.015435   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:17.015585   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.124720   46412 node_ready.go:35] waiting up to 6m0s for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.124831   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:17.128186   46412 node_ready.go:49] node "embed-certs-588699" has status "Ready":"True"
	I0914 22:52:17.128211   46412 node_ready.go:38] duration metric: took 3.459847ms waiting for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.128221   46412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.133021   46412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138574   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.138594   46412 pod_ready.go:81] duration metric: took 5.550933ms waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138605   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151548   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.151569   46412 pod_ready.go:81] duration metric: took 12.956129ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151581   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169368   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.169393   46412 pod_ready.go:81] duration metric: took 17.803681ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169406   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.180202   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:17.180227   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:17.184052   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:17.227381   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:17.227411   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:17.233773   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.293762   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:17.293788   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:17.328911   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.328934   46412 pod_ready.go:81] duration metric: took 159.520585ms waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.328942   46412 pod_ready.go:38] duration metric: took 200.709608ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.328958   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:17.329008   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:17.379085   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:18.947663   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.822786746s)
	I0914 22:52:18.947705   46412 start.go:917] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
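	(The sed pipeline completed above is what injects this record: as the command itself shows, it splices a hosts block into the CoreDNS Corefile, plus a log directive before errors:

	        hosts {
	           192.168.61.1 host.minikube.internal
	           fallthrough
	        }

	so pods in the cluster can resolve the hypervisor-side gateway address by name.)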
	I0914 22:52:19.171809   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937996858s)
	I0914 22:52:19.171861   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171872   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.98779094s)
	I0914 22:52:19.171908   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171927   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171878   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171875   46412 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.842825442s)
	I0914 22:52:19.172234   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172277   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172292   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172289   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172307   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172322   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172352   46412 api_server.go:72] duration metric: took 2.181532709s to wait for apiserver process to appear ...
	I0914 22:52:19.172322   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172369   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.172377   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172387   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172396   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172410   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:52:19.172625   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172643   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172657   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172667   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172688   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172715   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172723   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172955   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172969   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.173012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.205041   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
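	(The healthz probe that returned 200 here can be reproduced by hand after the run, assuming the kubeconfig context minikube wrote for this profile:

	    kubectl --context embed-certs-588699 get --raw /healthz

	A bare curl -k https://192.168.61.205:8443/healthz reaches the same endpoint, but depending on the cluster's anonymous-auth and RBAC settings it may answer 403 rather than ok.)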
	I0914 22:52:19.209533   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:19.209561   46412 api_server.go:131] duration metric: took 37.185195ms to wait for apiserver health ...
	I0914 22:52:19.209573   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:19.225866   46412 system_pods.go:59] 7 kube-system pods found
	I0914 22:52:19.225893   46412 system_pods.go:61] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.225900   46412 system_pods.go:61] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.225908   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.225915   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.225921   46412 system_pods.go:61] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.225928   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.225934   46412 system_pods.go:61] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending
	I0914 22:52:19.225947   46412 system_pods.go:74] duration metric: took 16.366454ms to wait for pod list to return data ...
	I0914 22:52:19.225958   46412 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:19.232176   46412 default_sa.go:45] found service account: "default"
	I0914 22:52:19.232202   46412 default_sa.go:55] duration metric: took 6.234795ms for default service account to be created ...
	I0914 22:52:19.232221   46412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:19.238383   46412 system_pods.go:86] 7 kube-system pods found
	I0914 22:52:19.238415   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.238426   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.238433   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.238442   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.238448   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.238454   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.238463   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.238486   46412 retry.go:31] will retry after 271.864835ms: missing components: kube-dns
	I0914 22:52:19.431792   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.052667289s)
	I0914 22:52:19.431858   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.431875   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432217   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432254   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432265   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432277   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.432291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432561   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432615   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432626   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432637   46412 addons.go:467] Verifying addon metrics-server=true in "embed-certs-588699"
	I0914 22:52:19.434406   46412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:15.499654   45407 pod_ready.go:81] duration metric: took 4m0.00095032s waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:15.499683   45407 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:15.499692   45407 pod_ready.go:38] duration metric: took 4m4.80145633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:15.499709   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:15.499741   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:15.499821   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:15.551531   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:15.551573   45407 cri.go:89] found id: ""
	I0914 22:52:15.551584   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:15.551638   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.555602   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:15.555649   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:15.583476   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:15.583497   45407 cri.go:89] found id: ""
	I0914 22:52:15.583504   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:15.583541   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.587434   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:15.587499   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:15.614791   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:15.614813   45407 cri.go:89] found id: ""
	I0914 22:52:15.614821   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:15.614865   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.618758   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:15.618813   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:15.651772   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:15.651798   45407 cri.go:89] found id: ""
	I0914 22:52:15.651807   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:15.651862   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.656464   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:15.656533   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:15.701258   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:15.701289   45407 cri.go:89] found id: ""
	I0914 22:52:15.701299   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:15.701359   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.705980   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:15.706049   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:15.741616   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:15.741640   45407 cri.go:89] found id: ""
	I0914 22:52:15.741647   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:15.741702   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.745863   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:15.745913   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:15.779362   45407 cri.go:89] found id: ""
	I0914 22:52:15.779385   45407 logs.go:284] 0 containers: []
	W0914 22:52:15.779395   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:15.779403   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:15.779462   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:15.815662   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:15.815691   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.815698   45407 cri.go:89] found id: ""
	I0914 22:52:15.815707   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:15.815781   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.820879   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.826312   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:15.826338   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.864143   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:15.864175   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:16.401646   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:16.401689   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:16.442964   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:16.443000   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:16.612411   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:16.612444   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:16.664620   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:16.664652   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:16.702405   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:16.702432   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:16.738583   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:16.738615   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:16.752752   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:16.752788   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:16.793883   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:16.793924   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:16.825504   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:16.825531   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:16.879008   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:16.879046   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:16.910902   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:16.910941   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.477726   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:19.494214   45407 api_server.go:72] duration metric: took 4m15.689238s to wait for apiserver process to appear ...
	I0914 22:52:19.494240   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.494281   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:19.494341   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:19.534990   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:19.535014   45407 cri.go:89] found id: ""
	I0914 22:52:19.535023   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:19.535081   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.540782   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:19.540850   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:19.570364   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:19.570390   45407 cri.go:89] found id: ""
	I0914 22:52:19.570399   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:19.570465   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.575964   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:19.576027   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:19.608023   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:19.608047   45407 cri.go:89] found id: ""
	I0914 22:52:19.608056   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:19.608098   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.612290   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:19.612343   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:19.644658   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:19.644682   45407 cri.go:89] found id: ""
	I0914 22:52:19.644692   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:19.644743   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.651016   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:19.651092   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:19.693035   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:19.693059   45407 cri.go:89] found id: ""
	I0914 22:52:19.693068   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:19.693122   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.697798   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:19.697864   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:19.733805   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.733828   45407 cri.go:89] found id: ""
	I0914 22:52:19.733837   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:19.733890   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.737902   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:19.737976   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:19.765139   45407 cri.go:89] found id: ""
	I0914 22:52:19.765169   45407 logs.go:284] 0 containers: []
	W0914 22:52:19.765180   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:19.765188   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:19.765248   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:19.793734   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.793756   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:19.793761   45407 cri.go:89] found id: ""
	I0914 22:52:19.793767   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:19.793807   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.797559   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.801472   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:19.801492   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:19.937110   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:19.937138   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.987564   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:19.987599   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.436138   46412 addons.go:502] enable addons completed in 2.506819532s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:19.523044   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.523077   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.523089   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.523096   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.523103   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.523109   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.523115   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.523124   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.523137   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.523164   46412 retry.go:31] will retry after 369.359833ms: missing components: kube-dns
	I0914 22:52:19.900488   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.900529   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.900541   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.900550   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.900558   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.900564   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.900571   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.900587   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.900608   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.900630   46412 retry.go:31] will retry after 329.450987ms: missing components: kube-dns
	I0914 22:52:20.245124   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.245152   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.245160   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.245166   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.245171   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.245177   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.245185   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.245194   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.245204   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.245225   46412 retry.go:31] will retry after 392.738624ms: missing components: kube-dns
	I0914 22:52:20.645671   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.645706   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.645716   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.645725   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.645737   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.645747   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.645756   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.645770   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.645783   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.645803   46412 retry.go:31] will retry after 463.608084ms: missing components: kube-dns
	I0914 22:52:21.118889   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:21.118920   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Running
	I0914 22:52:21.118926   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:21.118931   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:21.118937   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:21.118941   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:21.118946   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:21.118954   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:21.118963   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:21.118971   46412 system_pods.go:126] duration metric: took 1.886741356s to wait for k8s-apps to be running ...
	I0914 22:52:21.118984   46412 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:21.119025   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:21.134331   46412 system_svc.go:56] duration metric: took 15.34035ms WaitForService to wait for kubelet.
	I0914 22:52:21.134358   46412 kubeadm.go:581] duration metric: took 4.143541631s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:21.134381   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:21.137182   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:21.137207   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:21.137230   46412 node_conditions.go:105] duration metric: took 2.834168ms to run NodePressure ...
	I0914 22:52:21.137243   46412 start.go:228] waiting for startup goroutines ...
	I0914 22:52:21.137252   46412 start.go:233] waiting for cluster config update ...
	I0914 22:52:21.137272   46412 start.go:242] writing updated cluster config ...
	I0914 22:52:21.137621   46412 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:21.184252   46412 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:21.186251   46412 out.go:177] * Done! kubectl is now configured to use "embed-certs-588699" cluster and "default" namespace by default
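A minimal sketch of how the context named in the "Done!" line above could be exercised once such a run finishes, assuming kubectl is on PATH; the context name is taken from the log, but the commands themselves are illustrative and were not part of the run:

	# list nodes and kube-system pods through the context minikube just wrote
	kubectl --context embed-certs-588699 get nodes -o wide
	kubectl --context embed-certs-588699 -n kube-system get pods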
	I0914 22:52:20.022483   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:20.022512   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:20.062375   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:20.062403   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:20.099744   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:20.099776   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:20.129490   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:20.129515   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:20.165896   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:20.165922   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:20.692724   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:20.692758   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:20.761038   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:20.761086   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:20.777087   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:20.777114   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:20.808980   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:20.809020   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:20.845904   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:20.845942   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.393816   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:52:23.399946   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:52:23.401251   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:23.401271   45407 api_server.go:131] duration metric: took 3.907024801s to wait for apiserver health ...
	I0914 22:52:23.401279   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:23.401303   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:23.401346   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:23.433871   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.433895   45407 cri.go:89] found id: ""
	I0914 22:52:23.433905   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:23.433962   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.438254   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:23.438317   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:23.468532   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:23.468555   45407 cri.go:89] found id: ""
	I0914 22:52:23.468564   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:23.468626   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.473599   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:23.473658   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:23.509951   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:23.509976   45407 cri.go:89] found id: ""
	I0914 22:52:23.509986   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:23.510041   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.516637   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:23.516722   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:23.549562   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.549587   45407 cri.go:89] found id: ""
	I0914 22:52:23.549596   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:23.549653   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.553563   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:23.553626   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:23.584728   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:23.584749   45407 cri.go:89] found id: ""
	I0914 22:52:23.584756   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:23.584797   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.588600   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:23.588653   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:23.616590   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.616609   45407 cri.go:89] found id: ""
	I0914 22:52:23.616617   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:23.616669   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.620730   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:23.620782   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:23.648741   45407 cri.go:89] found id: ""
	I0914 22:52:23.648765   45407 logs.go:284] 0 containers: []
	W0914 22:52:23.648773   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:23.648781   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:23.648831   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:23.680814   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:23.680839   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:23.680846   45407 cri.go:89] found id: ""
	I0914 22:52:23.680854   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:23.680914   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.685954   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.690428   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:23.690459   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:23.818421   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:23.818456   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.867863   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:23.867894   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.903362   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:23.903393   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:23.943793   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:23.943820   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:24.538337   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:24.538390   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:24.585031   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:24.585072   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:24.639086   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:24.639120   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:24.650905   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:24.650925   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:24.698547   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:24.698590   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:24.745590   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:24.745619   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:24.777667   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:24.777697   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:24.811536   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:24.811565   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:25.132299   46713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (12.552094274s)
	I0914 22:52:25.132371   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:25.146754   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:52:25.155324   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:52:25.164387   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:52:25.164429   46713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0914 22:52:25.227970   46713 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0914 22:52:25.228029   46713 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:52:25.376482   46713 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:52:25.376603   46713 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:52:25.376721   46713 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:52:25.536163   46713 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:52:25.536339   46713 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:52:25.543555   46713 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0914 22:52:25.663579   46713 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:52:25.665315   46713 out.go:204]   - Generating certificates and keys ...
	I0914 22:52:25.665428   46713 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:52:25.665514   46713 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:52:25.665610   46713 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:52:25.665688   46713 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:52:25.665777   46713 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:52:25.665844   46713 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:52:25.665925   46713 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:52:25.666002   46713 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:52:25.666095   46713 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:52:25.666223   46713 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:52:25.666277   46713 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:52:25.666352   46713 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:52:25.931689   46713 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:52:26.088693   46713 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:52:26.251867   46713 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:52:26.566157   46713 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:52:26.567520   46713 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:52:27.360740   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:52:27.360780   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.360788   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.360795   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.360802   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.360809   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.360816   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.360827   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.360841   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.360848   45407 system_pods.go:74] duration metric: took 3.959563404s to wait for pod list to return data ...
	I0914 22:52:27.360859   45407 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:27.363690   45407 default_sa.go:45] found service account: "default"
	I0914 22:52:27.363715   45407 default_sa.go:55] duration metric: took 2.849311ms for default service account to be created ...
	I0914 22:52:27.363724   45407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:27.372219   45407 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:27.372520   45407 system_pods.go:89] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.372552   45407 system_pods.go:89] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.372571   45407 system_pods.go:89] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.372590   45407 system_pods.go:89] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.372602   45407 system_pods.go:89] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.372616   45407 system_pods.go:89] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.372744   45407 system_pods.go:89] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.372835   45407 system_pods.go:89] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.372845   45407 system_pods.go:126] duration metric: took 9.100505ms to wait for k8s-apps to be running ...
	I0914 22:52:27.372854   45407 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:27.373084   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:27.390112   45407 system_svc.go:56] duration metric: took 17.249761ms WaitForService to wait for kubelet.
	I0914 22:52:27.390137   45407 kubeadm.go:581] duration metric: took 4m23.585167656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:27.390174   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:27.393099   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:27.393123   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:27.393133   45407 node_conditions.go:105] duration metric: took 2.953927ms to run NodePressure ...
	I0914 22:52:27.393142   45407 start.go:228] waiting for startup goroutines ...
	I0914 22:52:27.393148   45407 start.go:233] waiting for cluster config update ...
	I0914 22:52:27.393156   45407 start.go:242] writing updated cluster config ...
	I0914 22:52:27.393379   45407 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:27.441228   45407 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:27.442889   45407 out.go:177] * Done! kubectl is now configured to use "no-preload-344363" cluster and "default" namespace by default
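The run above left metrics-server-57f55c9bc5-swnnf Pending before declaring the profile done; a hedged sketch of how that pod could be inspected afterwards (pod and context names are taken from the log, the commands themselves are illustrative, not captured output):

	# show scheduling state and recent events for the pending metrics-server pod
	kubectl --context no-preload-344363 -n kube-system get pod metrics-server-57f55c9bc5-swnnf -o wide
	kubectl --context no-preload-344363 -n kube-system describe pod metrics-server-57f55c9bc5-swnnf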
	I0914 22:52:26.569354   46713 out.go:204]   - Booting up control plane ...
	I0914 22:52:26.569484   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:52:26.582407   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:52:26.589858   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:52:26.591607   46713 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:52:26.596764   46713 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:52:37.101083   46713 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503887 seconds
	I0914 22:52:37.101244   46713 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:37.116094   46713 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:37.633994   46713 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:37.634186   46713 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-930717 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 22:52:38.144071   46713 kubeadm.go:322] [bootstrap-token] Using token: jnf2g9.h0rslaob8wj902ym
	I0914 22:52:38.145543   46713 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:38.145661   46713 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:38.153514   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:38.159575   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:38.164167   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:38.167903   46713 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:38.241317   46713 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:38.572283   46713 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:38.572309   46713 kubeadm.go:322] 
	I0914 22:52:38.572399   46713 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:38.572410   46713 kubeadm.go:322] 
	I0914 22:52:38.572526   46713 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:38.572547   46713 kubeadm.go:322] 
	I0914 22:52:38.572581   46713 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:38.572669   46713 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:38.572762   46713 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:38.572775   46713 kubeadm.go:322] 
	I0914 22:52:38.572836   46713 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:38.572926   46713 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:38.573012   46713 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:38.573020   46713 kubeadm.go:322] 
	I0914 22:52:38.573089   46713 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0914 22:52:38.573152   46713 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:38.573159   46713 kubeadm.go:322] 
	I0914 22:52:38.573222   46713 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573313   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:38.573336   46713 kubeadm.go:322]     --control-plane 	  
	I0914 22:52:38.573343   46713 kubeadm.go:322] 
	I0914 22:52:38.573406   46713 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:38.573414   46713 kubeadm.go:322] 
	I0914 22:52:38.573527   46713 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573687   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:38.574219   46713 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
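The [WARNING Service-Kubelet] line above is verbatim kubeadm output; the remedy it names is the single command below, shown here only to spell out what the message asks for on a host where the kubelet should start on boot (in this run minikube manages the kubelet service itself):

	# what the kubeadm warning asks for
	sudo systemctl enable kubelet.service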
	I0914 22:52:38.574248   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:52:38.574261   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:38.575900   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:38.577300   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:38.587120   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
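The 457-byte file copied above is the bridge CNI config minikube writes once it detects the "kvm2" driver + "crio" runtime; its exact contents are not captured in this log, so the snippet below is only a sketch, assuming the usual bridge + portmap conflist shape and a placeholder pod subnet:

	# illustrative only: approximate shape of /etc/cni/net.d/1-k8s.conflist, not the file from this run
	cat <<-'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF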
	I0914 22:52:38.610197   46713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:38.610265   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.610267   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=old-k8s-version-930717 minikube.k8s.io/updated_at=2023_09_14T22_52_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.858082   46713 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:38.858297   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.960045   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:39.549581   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.049788   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.549998   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.049043   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.549875   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.049596   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.549039   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.049563   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.549663   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.049534   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.549938   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.049227   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.549171   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.049628   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.550019   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.049857   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.549272   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.049648   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.549709   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.049770   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.550050   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.048948   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.549154   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.049695   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.549811   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.049813   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.549858   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.049505   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.149056   46713 kubeadm.go:1081] duration metric: took 14.538858246s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:53.149093   46713 kubeadm.go:406] StartCluster complete in 5m40.2118148s
	I0914 22:52:53.149114   46713 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.149200   46713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:53.150928   46713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.151157   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:53.151287   46713 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:53.151382   46713 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151391   46713 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151405   46713 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-930717"
	I0914 22:52:53.151411   46713 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-930717"
	W0914 22:52:53.151413   46713 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:53.151419   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:52:53.151423   46713 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-930717"
	W0914 22:52:53.151433   46713 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:53.151479   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151412   46713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-930717"
	I0914 22:52:53.151484   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151796   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151820   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151958   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.152044   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.170764   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0914 22:52:53.170912   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0914 22:52:53.171012   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0914 22:52:53.171235   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171345   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171378   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171850   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171870   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171970   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171991   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171999   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.172019   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.172232   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172517   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172572   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172759   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.172910   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.172987   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.173110   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.173146   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.189453   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0914 22:52:53.189789   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.190229   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.190251   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.190646   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.190822   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.192990   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.195176   46713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:53.194738   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0914 22:52:53.196779   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:53.196797   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:53.196813   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.195752   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.197457   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.197476   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.197849   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.198026   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.200022   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.200176   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.201917   46713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:53.200654   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.200795   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.203540   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.203632   46713 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.203652   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.203844   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.204002   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.206460   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.206968   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.206998   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.207153   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.207303   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.207524   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.207672   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.253944   46713 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-930717"
	W0914 22:52:53.253968   46713 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:53.253990   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.254330   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.254377   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0914 22:52:53.270047   46713 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-930717" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0914 22:52:53.270077   46713 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0914 22:52:53.270099   46713 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:53.271730   46713 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:53.270422   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0914 22:52:53.273255   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:53.273653   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.274180   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.274206   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.274559   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.275121   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.275165   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.291000   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0914 22:52:53.291405   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.291906   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.291927   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.292312   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.292529   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.294366   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.294583   46713 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.294598   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:53.294611   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.297265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.297809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297895   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.298057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.298236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.298383   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.344235   46713 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.344478   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:53.350176   46713 node_ready.go:49] node "old-k8s-version-930717" has status "Ready":"True"
	I0914 22:52:53.350196   46713 node_ready.go:38] duration metric: took 5.934445ms waiting for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.350204   46713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:53.359263   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:53.359296   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:53.367792   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:53.384576   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.397687   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:53.397703   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:53.439813   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:53.439843   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:53.473431   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.499877   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:54.233171   46713 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:54.365130   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365156   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365178   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365198   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365438   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365465   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365476   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365481   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.365486   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365546   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365556   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365565   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365574   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367064   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367090   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367068   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367489   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367513   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367526   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.367540   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367489   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367757   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367810   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367852   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.830646   46713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330728839s)
	I0914 22:52:54.830698   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.830711   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831036   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831059   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831065   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.831080   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.831096   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831312   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831328   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831338   46713 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-930717"
	I0914 22:52:54.832992   46713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:54.834828   46713 addons.go:502] enable addons completed in 1.683549699s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:55.415046   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:57.878279   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:59.879299   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:01.879559   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:03.880088   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:05.880334   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.880355   46713 pod_ready.go:81] duration metric: took 12.512536425s waiting for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.880364   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885370   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.885386   46713 pod_ready.go:81] duration metric: took 5.016722ms waiting for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885394   46713 pod_ready.go:38] duration metric: took 12.535181673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:53:05.885413   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:53:05.885466   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:53:05.901504   46713 api_server.go:72] duration metric: took 12.631380008s to wait for apiserver process to appear ...
	I0914 22:53:05.901522   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:53:05.901534   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:53:05.907706   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:53:05.908445   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:53:05.908466   46713 api_server.go:131] duration metric: took 6.937898ms to wait for apiserver health ...
	I0914 22:53:05.908475   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:53:05.911983   46713 system_pods.go:59] 5 kube-system pods found
	I0914 22:53:05.912001   46713 system_pods.go:61] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.912008   46713 system_pods.go:61] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.912013   46713 system_pods.go:61] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.912022   46713 system_pods.go:61] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.912033   46713 system_pods.go:61] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.912043   46713 system_pods.go:74] duration metric: took 3.562804ms to wait for pod list to return data ...
	I0914 22:53:05.912054   46713 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:53:05.914248   46713 default_sa.go:45] found service account: "default"
	I0914 22:53:05.914267   46713 default_sa.go:55] duration metric: took 2.203622ms for default service account to be created ...
	I0914 22:53:05.914276   46713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:53:05.917292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:05.917310   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.917315   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.917319   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.917325   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.917331   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.917343   46713 retry.go:31] will retry after 277.910308ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.201147   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.201170   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.201175   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.201179   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.201185   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.201191   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.201205   46713 retry.go:31] will retry after 262.96693ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.470372   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.470410   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.470418   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.470425   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.470435   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.470446   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.470481   46713 retry.go:31] will retry after 486.428451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.961666   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.961693   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.961700   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.961706   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.961716   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.961724   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.961740   46713 retry.go:31] will retry after 524.467148ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:07.491292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:07.491315   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:07.491321   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:07.491325   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:07.491331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:07.491337   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:07.491370   46713 retry.go:31] will retry after 567.308028ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.063587   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.063612   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.063618   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.063622   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.063629   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.063635   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.063649   46713 retry.go:31] will retry after 723.150919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.791530   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.791561   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.791571   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.791578   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.791588   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.791597   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.791616   46713 retry.go:31] will retry after 1.173741151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:09.971866   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:09.971895   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:09.971903   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:09.971909   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:09.971919   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:09.971928   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:09.971946   46713 retry.go:31] will retry after 1.046713916s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:11.024191   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:11.024220   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:11.024226   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:11.024231   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:11.024238   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:11.024244   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:11.024260   46713 retry.go:31] will retry after 1.531910243s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:12.562517   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:12.562555   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:12.562564   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:12.562573   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:12.562584   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:12.562594   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:12.562612   46713 retry.go:31] will retry after 2.000243773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:14.570247   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:14.570284   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:14.570294   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:14.570303   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:14.570320   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:14.570329   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:14.570346   46713 retry.go:31] will retry after 2.095330784s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:16.670345   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:16.670372   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:16.670377   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:16.670382   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:16.670394   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:16.670401   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:16.670416   46713 retry.go:31] will retry after 2.811644755s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:19.488311   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:19.488339   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:19.488344   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:19.488348   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:19.488354   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:19.488362   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:19.488380   46713 retry.go:31] will retry after 3.274452692s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:22.768417   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:22.768446   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:22.768454   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:22.768461   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:22.768471   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:22.768481   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:22.768499   46713 retry.go:31] will retry after 5.52037196s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:28.294932   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:28.294958   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:28.294964   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:28.294967   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:28.294975   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:28.294980   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:28.294994   46713 retry.go:31] will retry after 4.305647383s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:32.605867   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:32.605894   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:32.605900   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:32.605903   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:32.605910   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:32.605915   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:32.605929   46713 retry.go:31] will retry after 8.214918081s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:40.825284   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:40.825314   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:40.825319   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:40.825324   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:40.825331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:40.825336   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:40.825352   46713 retry.go:31] will retry after 10.5220598s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:51.353809   46713 system_pods.go:86] 7 kube-system pods found
	I0914 22:53:51.353844   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:51.353851   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:51.353856   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Pending
	I0914 22:53:51.353862   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:51.353868   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Pending
	I0914 22:53:51.353878   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:51.353887   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:51.353907   46713 retry.go:31] will retry after 10.482387504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:54:01.842876   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:01.842900   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:01.842905   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:01.842909   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Pending
	I0914 22:54:01.842914   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:01.842918   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Pending
	I0914 22:54:01.842921   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:01.842925   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:01.842931   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:01.842937   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:01.842950   46713 retry.go:31] will retry after 14.535469931s: missing components: etcd, kube-controller-manager
	I0914 22:54:16.384703   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:16.384732   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:16.384738   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:16.384742   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Running
	I0914 22:54:16.384747   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:16.384751   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Running
	I0914 22:54:16.384754   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:16.384758   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:16.384766   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:16.384773   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:16.384782   46713 system_pods.go:126] duration metric: took 1m10.470499333s to wait for k8s-apps to be running ...
	I0914 22:54:16.384791   46713 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:54:16.384849   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:16.409329   46713 system_svc.go:56] duration metric: took 24.530447ms WaitForService to wait for kubelet.
	I0914 22:54:16.409359   46713 kubeadm.go:581] duration metric: took 1m23.139238057s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:54:16.409385   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:54:16.412461   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:54:16.412490   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:54:16.412505   46713 node_conditions.go:105] duration metric: took 3.107771ms to run NodePressure ...
	I0914 22:54:16.412519   46713 start.go:228] waiting for startup goroutines ...
	I0914 22:54:16.412529   46713 start.go:233] waiting for cluster config update ...
	I0914 22:54:16.412546   46713 start.go:242] writing updated cluster config ...
	I0914 22:54:16.412870   46713 ssh_runner.go:195] Run: rm -f paused
	I0914 22:54:16.460181   46713 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0914 22:54:16.461844   46713 out.go:177] 
	W0914 22:54:16.463221   46713 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0914 22:54:16.464486   46713 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0914 22:54:16.465912   46713 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-930717" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:46:13 UTC, ends at Thu 2023-09-14 23:00:17 UTC. --
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.028422052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=21d91db5-9246-46f9-8ae7-410cf32bf1f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.028679574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=21d91db5-9246-46f9-8ae7-410cf32bf1f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.062146532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c538e4a-2bf9-4f6f-91c1-4e30139ced66 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.062254744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c538e4a-2bf9-4f6f-91c1-4e30139ced66 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.062597268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c538e4a-2bf9-4f6f-91c1-4e30139ced66 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.098926473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2ee8177e-a775-4f63-9f85-22d868cc916d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.099070158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2ee8177e-a775-4f63-9f85-22d868cc916d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.099270216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2ee8177e-a775-4f63-9f85-22d868cc916d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.131812843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=76201be9-6611-41bb-bf2b-012ddfa0f328 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.131876900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=76201be9-6611-41bb-bf2b-012ddfa0f328 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.132125399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=76201be9-6611-41bb-bf2b-012ddfa0f328 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.165153069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=25bd4377-b96b-4be3-b921-3f2766531cc2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.165228394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=25bd4377-b96b-4be3-b921-3f2766531cc2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.165417649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=25bd4377-b96b-4be3-b921-3f2766531cc2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.211070477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fb6fdc3d-8b0a-41c8-94bd-939552f5f4df name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.211261656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fb6fdc3d-8b0a-41c8-94bd-939552f5f4df name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.211552972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fb6fdc3d-8b0a-41c8-94bd-939552f5f4df name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.243188043Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fa81e06f-5a81-431f-86f1-da49d88bb463 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.243275833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fa81e06f-5a81-431f-86f1-da49d88bb463 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.243546125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fa81e06f-5a81-431f-86f1-da49d88bb463 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.268619102Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=cd7cd49a-67c5-4fc6-a472-63127bbcfaea name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.268856970Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&PodSandboxMetadata{Name:busybox,Uid:012aa3b5-77e6-4f18-a715-0b2b77e4caf8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731615303640666,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:46:47.338046708Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-8phxz,Uid:45bf5b67-3fc3-4aa7-90a0-2a2957384380,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169473
1615000870355,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:46:47.338047770Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ca20be8af2c9ef05b857598f1736a0cab9287ba3ffa9bf67914c5d0f5518e17,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-hfgp8,Uid:09b0d4cf-ab11-4677-88c4-f530af4643e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731611403460644,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-hfgp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09b0d4cf-ab11-4677-88c4-f530af4643e1,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14
T22:46:47.338044233Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731607688292678,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T22:46:47.338045408Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&PodSandboxMetadata{Name:kube-proxy-j2qmv,Uid:ca04e473-7bc4-4d56-ade1-0ae559f40dc9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731607684034748,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca04e473-7bc4-4d56-ade1-0ae559f40dc9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2023-09-14T22:46:47.338038508Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-799144,Uid:a01685043f02c1752cc818897c65fee3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731600876927320,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a01685043f02c1752cc818897c65fee3,kubernetes.io/config.seen: 2023-09-14T22:46:40.339196779Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-79
9144,Uid:0c563be4e3599500e857b86431f33760,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731600860303147,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c563be4e3599500e857b86431f33760,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0c563be4e3599500e857b86431f33760,kubernetes.io/config.seen: 2023-09-14T22:46:40.339195549Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-799144,Uid:bd18e0cb5393d8437d879abb73f5beea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731600852018677,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-def
ault-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.175:8444,kubernetes.io/config.hash: bd18e0cb5393d8437d879abb73f5beea,kubernetes.io/config.seen: 2023-09-14T22:46:40.339191908Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-799144,Uid:80294e3a8555a1593a1f189f3871c227,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731600840565533,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.50.175:2379,kubernetes.io/config.hash: 80294e3a8555a1593a1f189f3871c227,kubernetes.io/config.seen: 2023-09-14T22:46:40.339197599Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=cd7cd49a-67c5-4fc6-a472-63127bbcfaea name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.269546567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5fbf7070-260b-4908-8329-0a08ec201a3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.269622691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5fbf7070-260b-4908-8329-0a08ec201a3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:00:17 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:00:17.269822443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5fbf7070-260b-4908-8329-0a08ec201a3b name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	f5ece5e451cf6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   ce40ecb757b40
	099955c517e1f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   88a2d3d4437e5
	809210de2cd64       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   130c356cb6471
	da519760d06f2       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      13 minutes ago      Running             kube-proxy                1                   c10b5135af26c
	5a644b09188e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   ce40ecb757b40
	95a2e35f25145       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   c9096b8ed93e7
	8e23190d2ef54       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      13 minutes ago      Running             kube-scheduler            1                   aaa2117b4c309
	f149a35f98826       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      13 minutes ago      Running             kube-apiserver            1                   83df4bc3f4baf
	dae1ba10c6d57       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      13 minutes ago      Running             kube-controller-manager   1                   5ed2f39d120a2
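	
	(Illustrative cross-check, not part of the captured output: the table above is the CRI-O view of the node. Assuming the default-k8s-diff-port-799144 profile is still running and crictl is available inside the VM, the same listing could be reproduced with:)
	  $ minikube -p default-k8s-diff-port-799144 ssh "sudo crictl ps -a"
	  $ minikube -p default-k8s-diff-port-799144 ssh "sudo crictl pods"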
	
	* 
	* ==> coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50917 - 63898 "HINFO IN 8693031495787485691.1317873420319016237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006894794s
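	
	(Illustrative only: the CoreDNS output above is read straight from the container via CRI-O; assuming the kubeconfig context created for this profile is still reachable, the same log could also be fetched through the API server:)
	  $ kubectl --context default-k8s-diff-port-799144 -n kube-system logs coredns-5dd5756b68-8phxz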
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-799144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-799144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=default-k8s-diff-port-799144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_39_45_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:39:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-799144
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 23:00:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:57:30 +0000   Thu, 14 Sep 2023 22:39:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:57:30 +0000   Thu, 14 Sep 2023 22:39:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:57:30 +0000   Thu, 14 Sep 2023 22:39:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:57:30 +0000   Thu, 14 Sep 2023 22:46:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.175
	  Hostname:    default-k8s-diff-port-799144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4a9f75a6867453fb762cb9af543d17a
	  System UUID:                b4a9f75a-6867-453f-b762-cb9af543d17a
	  Boot ID:                    79147eff-56bd-419b-a416-69d8f252b3e9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-8phxz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-799144                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-799144             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-799144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-j2qmv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-799144             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-57f55c9bc5-hfgp8                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-799144 status is now: NodeReady
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-799144 event: Registered Node default-k8s-diff-port-799144 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-799144 event: Registered Node default-k8s-diff-port-799144 in Controller
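	
	(Illustrative cross-check of the figures above: the Allocated resources block is just the column sums of the per-pod requests listed here, i.e. CPU 100m + 100m + 250m + 200m + 100m + 100m = 850m of the node's 2 CPUs, about 42%, and memory 70Mi + 100Mi + 200Mi = 370Mi of 2165900Ki, about 17%. Assuming the cluster is still up, the same view can be regenerated with:)
	  $ kubectl --context default-k8s-diff-port-799144 describe node default-k8s-diff-port-799144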
	
	* 
	* ==> dmesg <==
	* [Sep14 22:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.211257] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.796993] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135698] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.454178] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.838915] systemd-fstab-generator[634]: Ignoring "noauto" for root device
	[  +0.111182] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.129885] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.121099] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.194586] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[ +16.977526] systemd-fstab-generator[905]: Ignoring "noauto" for root device
	[ +14.956871] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] <==
	* {"level":"warn","ts":"2023-09-14T22:46:50.705635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.401638ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2373267904961486132 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:348 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2023-09-14T22:46:50.707062Z","caller":"traceutil/trace.go:171","msg":"trace[329880174] linearizableReadLoop","detail":"{readStateIndex:472; appliedIndex:469; }","duration":"783.743003ms","start":"2023-09-14T22:46:49.923304Z","end":"2023-09-14T22:46:50.707047Z","steps":["trace[329880174] 'read index received'  (duration: 544.874827ms)","trace[329880174] 'applied index is now lower than readState.Index'  (duration: 238.867196ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-14T22:46:50.707173Z","caller":"traceutil/trace.go:171","msg":"trace[1706170364] transaction","detail":"{read_only:false; number_of_response:0; response_revision:445; }","duration":"784.221628ms","start":"2023-09-14T22:46:49.922942Z","end":"2023-09-14T22:46:50.707163Z","steps":["trace[1706170364] 'process raft request'  (duration: 545.227743ms)","trace[1706170364] 'compare'  (duration: 237.367447ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:46:50.707254Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:46:49.922928Z","time spent":"784.292428ms","remote":"127.0.0.1:36360","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":29,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:348 >> failure:<>"}
	{"level":"warn","ts":"2023-09-14T22:46:50.707587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"784.288509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"warn","ts":"2023-09-14T22:46:50.707616Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.154358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3738"}
	{"level":"info","ts":"2023-09-14T22:46:50.707679Z","caller":"traceutil/trace.go:171","msg":"trace[1991238610] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:446; }","duration":"244.219821ms","start":"2023-09-14T22:46:50.463449Z","end":"2023-09-14T22:46:50.707669Z","steps":["trace[1991238610] 'agreement among raft nodes before linearized reading'  (duration: 244.113183ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:46:50.707644Z","caller":"traceutil/trace.go:171","msg":"trace[1622378920] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:446; }","duration":"784.35304ms","start":"2023-09-14T22:46:49.923283Z","end":"2023-09-14T22:46:50.707636Z","steps":["trace[1622378920] 'agreement among raft nodes before linearized reading'  (duration: 784.254183ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:46:50.70791Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:46:49.923274Z","time spent":"784.623247ms","remote":"127.0.0.1:36326","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":238,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"info","ts":"2023-09-14T22:46:50.708322Z","caller":"traceutil/trace.go:171","msg":"trace[438214368] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"784.816408ms","start":"2023-09-14T22:46:49.923497Z","end":"2023-09-14T22:46:50.708313Z","steps":["trace[438214368] 'process raft request'  (duration: 783.315608ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:46:50.708402Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:46:49.923485Z","time spent":"784.890578ms","remote":"127.0.0.1:36298","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-799144.1784e5695726744a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-799144.1784e5695726744a\" value_size:694 lease:2373267904961486124 >> failure:<>"}
	{"level":"warn","ts":"2023-09-14T22:46:50.708526Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:46:49.976413Z","time spent":"732.111516ms","remote":"127.0.0.1:36396","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2023-09-14T22:46:52.582216Z","caller":"traceutil/trace.go:171","msg":"trace[1023979837] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"103.053583ms","start":"2023-09-14T22:46:52.479149Z","end":"2023-09-14T22:46:52.582203Z","steps":["trace[1023979837] 'process raft request'  (duration: 102.96938ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:46:52.595454Z","caller":"traceutil/trace.go:171","msg":"trace[1181550478] linearizableReadLoop","detail":"{readStateIndex:545; appliedIndex:544; }","duration":"103.852409ms","start":"2023-09-14T22:46:52.49159Z","end":"2023-09-14T22:46:52.595443Z","steps":["trace[1181550478] 'read index received'  (duration: 90.765779ms)","trace[1181550478] 'applied index is now lower than readState.Index'  (duration: 13.086212ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:46:52.595615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.031007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-799144\" ","response":"range_response_count:1 size:5714"}
	{"level":"info","ts":"2023-09-14T22:46:52.595673Z","caller":"traceutil/trace.go:171","msg":"trace[479671844] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-799144; range_end:; response_count:1; response_revision:511; }","duration":"104.105269ms","start":"2023-09-14T22:46:52.491562Z","end":"2023-09-14T22:46:52.595667Z","steps":["trace[479671844] 'agreement among raft nodes before linearized reading'  (duration: 103.981306ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:46:52.59588Z","caller":"traceutil/trace.go:171","msg":"trace[288169016] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"111.684904ms","start":"2023-09-14T22:46:52.484188Z","end":"2023-09-14T22:46:52.595873Z","steps":["trace[288169016] 'process raft request'  (duration: 111.179449ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:47:38.456548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.414694ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2373267904961486725 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" mod_revision:562 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-14T22:47:38.456807Z","caller":"traceutil/trace.go:171","msg":"trace[1089433272] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"123.842606ms","start":"2023-09-14T22:47:38.332934Z","end":"2023-09-14T22:47:38.456776Z","steps":["trace[1089433272] 'process raft request'  (duration: 123.808849ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:47:38.457174Z","caller":"traceutil/trace.go:171","msg":"trace[419564732] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"310.858481ms","start":"2023-09-14T22:47:38.146297Z","end":"2023-09-14T22:47:38.457156Z","steps":["trace[419564732] 'process raft request'  (duration: 194.63582ms)","trace[419564732] 'compare'  (duration: 115.311609ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:47:38.457261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:47:38.146282Z","time spent":"310.935409ms","remote":"127.0.0.1:36340","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":600,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" mod_revision:562 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" > >"}
	{"level":"info","ts":"2023-09-14T22:47:38.457477Z","caller":"traceutil/trace.go:171","msg":"trace[995560540] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"239.590026ms","start":"2023-09-14T22:47:38.217877Z","end":"2023-09-14T22:47:38.457467Z","steps":["trace[995560540] 'process raft request'  (duration: 238.793725ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:56:45.249916Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":786}
	{"level":"info","ts":"2023-09-14T22:56:45.255374Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":786,"took":"4.513283ms","hash":2275629792}
	{"level":"info","ts":"2023-09-14T22:56:45.255539Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2275629792,"revision":786,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  23:00:17 up 14 min,  0 users,  load average: 0.15, 0.16, 0.10
	Linux default-k8s-diff-port-799144 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] <==
	* E0914 22:56:47.836190       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 22:56:47.836198       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0914 22:56:47.836112       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:56:47.837437       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 22:57:46.689538       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.35.95:443: connect: connection refused
	I0914 22:57:46.689633       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 22:57:47.837191       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:57:47.837438       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 22:57:47.837449       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 22:57:47.838384       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:57:47.838489       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:57:47.838498       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 22:58:46.689517       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.35.95:443: connect: connection refused
	I0914 22:58:46.689570       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 22:59:46.690103       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.35.95:443: connect: connection refused
	I0914 22:59:46.690179       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 22:59:47.838334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:59:47.838395       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 22:59:47.838402       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 22:59:47.839518       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:59:47.839711       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:59:47.839748       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
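	
	(Illustrative only: the recurring 503s above mean the v1beta1.metrics.k8s.io APIService has no healthy backend; the refused dial to 10.106.35.95:443 is consistent with the metrics-server pod never starting, since its image pull from fake.domain fails, as shown in the kubelet log below. Assuming the cluster is still reachable, the registration and its backing endpoints could be inspected with:)
	  $ kubectl --context default-k8s-diff-port-799144 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context default-k8s-diff-port-799144 -n kube-system get endpoints metrics-server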
	
	* 
	* ==> kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] <==
	* I0914 22:54:32.085091       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:55:01.579466       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:55:02.093921       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:55:31.586524       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:55:32.103189       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:56:01.592330       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:56:02.112596       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:56:31.598222       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:56:32.121845       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:57:01.605275       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:57:02.130481       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:57:31.610026       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:57:32.140520       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 22:57:49.391080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="414µs"
	I0914 22:58:00.391233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="268.941µs"
	E0914 22:58:01.615782       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:58:02.149948       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:58:31.620739       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:58:32.157794       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:59:01.628164       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:59:02.167078       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:59:31.633363       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:59:32.179580       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:00:01.640696       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:00:02.187507       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] <==
	* I0914 22:46:49.480198       1 server_others.go:69] "Using iptables proxy"
	I0914 22:46:49.925014       1 node.go:141] Successfully retrieved node IP: 192.168.50.175
	I0914 22:46:49.967675       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:46:49.967813       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:46:49.970819       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:46:49.970916       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:46:49.971364       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:46:49.971585       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:46:49.972452       1 config.go:188] "Starting service config controller"
	I0914 22:46:49.972494       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:46:49.972515       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:46:49.972519       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:46:49.973044       1 config.go:315] "Starting node config controller"
	I0914 22:46:49.973073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:46:50.072837       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:46:50.073027       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:46:50.073287       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] <==
	* I0914 22:46:44.200410       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:46:46.810859       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:46:46.810904       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:46:46.810920       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:46:46.810926       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:46:46.845927       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:46:46.846087       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:46:46.847291       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:46:46.847372       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:46:46.848081       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:46:46.848156       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:46:46.947663       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:46:13 UTC, ends at Thu 2023-09-14 23:00:17 UTC. --
	Sep 14 22:57:36 default-k8s-diff-port-799144 kubelet[911]: E0914 22:57:36.391454     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:57:40 default-k8s-diff-port-799144 kubelet[911]: E0914 22:57:40.386742     911 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:57:40 default-k8s-diff-port-799144 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:57:40 default-k8s-diff-port-799144 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:57:40 default-k8s-diff-port-799144 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:57:49 default-k8s-diff-port-799144 kubelet[911]: E0914 22:57:49.373008     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:58:00 default-k8s-diff-port-799144 kubelet[911]: E0914 22:58:00.374575     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:58:12 default-k8s-diff-port-799144 kubelet[911]: E0914 22:58:12.373669     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:58:24 default-k8s-diff-port-799144 kubelet[911]: E0914 22:58:24.373673     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:58:37 default-k8s-diff-port-799144 kubelet[911]: E0914 22:58:37.372748     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:58:40 default-k8s-diff-port-799144 kubelet[911]: E0914 22:58:40.387666     911 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:58:40 default-k8s-diff-port-799144 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:58:40 default-k8s-diff-port-799144 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:58:40 default-k8s-diff-port-799144 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:58:50 default-k8s-diff-port-799144 kubelet[911]: E0914 22:58:50.374303     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:59:01 default-k8s-diff-port-799144 kubelet[911]: E0914 22:59:01.373626     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:59:14 default-k8s-diff-port-799144 kubelet[911]: E0914 22:59:14.373846     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:59:27 default-k8s-diff-port-799144 kubelet[911]: E0914 22:59:27.373449     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:59:39 default-k8s-diff-port-799144 kubelet[911]: E0914 22:59:39.374038     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 22:59:40 default-k8s-diff-port-799144 kubelet[911]: E0914 22:59:40.388272     911 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:59:40 default-k8s-diff-port-799144 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:59:40 default-k8s-diff-port-799144 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:59:40 default-k8s-diff-port-799144 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:59:53 default-k8s-diff-port-799144 kubelet[911]: E0914 22:59:53.373455     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:00:07 default-k8s-diff-port-799144 kubelet[911]: E0914 23:00:07.373931     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	
	* 
	* ==> storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] <==
	* I0914 22:46:49.064878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 22:47:19.068661       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] <==
	* I0914 22:47:19.720941       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:47:19.737711       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:47:19.737767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:47:37.145640       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:47:37.145884       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-799144_6156d333-5706-43bc-93d7-6bfcc42511b8!
	I0914 22:47:37.147833       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d62f02f3-7ad6-456b-a5fd-2b92f0ceaac6", APIVersion:"v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-799144_6156d333-5706-43bc-93d7-6bfcc42511b8 became leader
	I0914 22:47:37.246044       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-799144_6156d333-5706-43bc-93d7-6bfcc42511b8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-799144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-hfgp8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-799144 describe pod metrics-server-57f55c9bc5-hfgp8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-799144 describe pod metrics-server-57f55c9bc5-hfgp8: exit status 1 (62.788607ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-hfgp8" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-799144 describe pod metrics-server-57f55c9bc5-hfgp8: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.13s)
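For reference, the post-mortem above is gathered with ordinary kubectl commands, so the same checks can be re-run by hand against this profile's context. A minimal sketch, assuming the default-k8s-diff-port-799144 kubeconfig context from this run still exists (the describe step may return NotFound if the pod has since been replaced, as happened here):

	# list pods that are not in the Running phase, across all namespaces
	kubectl --context default-k8s-diff-port-799144 get po -A --field-selector=status.phase!=Running
	# describe the flagged metrics-server pod in kube-system
	kubectl --context default-k8s-diff-port-799144 -n kube-system describe pod metrics-server-57f55c9bc5-hfgp8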

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-588699 -n embed-certs-588699
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-14 23:01:21.718698511 +0000 UTC m=+5103.913040141
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-588699 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-588699 logs -n 25: (1.425023709s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-711912                           | kubernetes-upgrade-711912    | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:36 UTC |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-344363             | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:40 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799144  | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC |                     |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-344363                  | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-588699            | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799144       | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-930717        | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:51 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-588699                 | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-930717             | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC | 14 Sep 23 22:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:45:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:45:20.513575   46713 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:45:20.513835   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.513847   46713 out.go:309] Setting ErrFile to fd 2...
	I0914 22:45:20.513852   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.514030   46713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:45:20.514571   46713 out.go:303] Setting JSON to false
	I0914 22:45:20.515550   46713 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5263,"bootTime":1694726258,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:45:20.515607   46713 start.go:138] virtualization: kvm guest
	I0914 22:45:20.517738   46713 out.go:177] * [old-k8s-version-930717] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:45:20.519301   46713 notify.go:220] Checking for updates...
	I0914 22:45:20.519309   46713 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:45:20.520886   46713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:45:20.522525   46713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:45:20.524172   46713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:45:20.525826   46713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:45:20.527204   46713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:45:20.529068   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:45:20.529489   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.529542   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.548088   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0914 22:45:20.548488   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.548969   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.548985   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.549404   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.549555   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.551507   46713 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 22:45:20.552878   46713 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:45:20.553145   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.553176   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.566825   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0914 22:45:20.567181   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.567617   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.567646   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.568018   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.568195   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.601886   46713 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:45:20.603176   46713 start.go:298] selected driver: kvm2
	I0914 22:45:20.603188   46713 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.603284   46713 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:45:20.603926   46713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.603997   46713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:45:20.617678   46713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:45:20.618009   46713 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:45:20.618045   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:45:20.618062   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:45:20.618075   46713 start_flags.go:321] config:
	{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.618204   46713 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.619892   46713 out.go:177] * Starting control plane node old-k8s-version-930717 in cluster old-k8s-version-930717
	I0914 22:45:22.939748   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:20.621146   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:45:20.621171   46713 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0914 22:45:20.621184   46713 cache.go:57] Caching tarball of preloaded images
	I0914 22:45:20.621265   46713 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:45:20.621286   46713 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0914 22:45:20.621381   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:45:20.621551   46713 start.go:365] acquiring machines lock for old-k8s-version-930717: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:45:29.019730   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:32.091705   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:38.171724   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:41.243661   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:47.323733   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:50.395751   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:56.475703   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:59.547782   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:46:02.551591   45954 start.go:369] acquired machines lock for "default-k8s-diff-port-799144" in 3m15.018428257s
	I0914 22:46:02.551631   45954 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:02.551642   45954 fix.go:54] fixHost starting: 
	I0914 22:46:02.551944   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:02.551972   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:02.566520   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0914 22:46:02.566922   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:02.567373   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:02.567392   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:02.567734   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:02.567961   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:02.568128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:02.569692   45954 fix.go:102] recreateIfNeeded on default-k8s-diff-port-799144: state=Stopped err=<nil>
	I0914 22:46:02.569714   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	W0914 22:46:02.569887   45954 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:02.571684   45954 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799144" ...
	I0914 22:46:02.549458   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:02.549490   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:46:02.551419   45407 machine.go:91] provisioned docker machine in 4m37.435317847s
	I0914 22:46:02.551457   45407 fix.go:56] fixHost completed within 4m37.455553972s
	I0914 22:46:02.551462   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 4m37.455581515s
	W0914 22:46:02.551502   45407 start.go:688] error starting host: provision: host is not running
	W0914 22:46:02.551586   45407 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0914 22:46:02.551600   45407 start.go:703] Will try again in 5 seconds ...
	I0914 22:46:02.573354   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Start
	I0914 22:46:02.573535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring networks are active...
	I0914 22:46:02.574326   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network default is active
	I0914 22:46:02.574644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network mk-default-k8s-diff-port-799144 is active
	I0914 22:46:02.575046   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Getting domain xml...
	I0914 22:46:02.575767   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Creating domain...
	I0914 22:46:03.792613   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting to get IP...
	I0914 22:46:03.793573   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.793932   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.794029   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:03.793928   46868 retry.go:31] will retry after 250.767464ms: waiting for machine to come up
	I0914 22:46:04.046447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046928   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.046853   46868 retry.go:31] will retry after 320.29371ms: waiting for machine to come up
	I0914 22:46:04.368383   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368782   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368814   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.368726   46868 retry.go:31] will retry after 295.479496ms: waiting for machine to come up
	I0914 22:46:04.666192   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666655   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666680   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.666595   46868 retry.go:31] will retry after 572.033699ms: waiting for machine to come up
	I0914 22:46:05.240496   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240920   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240953   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.240872   46868 retry.go:31] will retry after 493.557238ms: waiting for machine to come up
	I0914 22:46:05.735682   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736201   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.736150   46868 retry.go:31] will retry after 848.645524ms: waiting for machine to come up
	I0914 22:46:06.586116   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586568   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:06.586473   46868 retry.go:31] will retry after 866.110647ms: waiting for machine to come up
	I0914 22:46:07.553803   45407 start.go:365] acquiring machines lock for no-preload-344363: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:46:07.454431   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454798   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454827   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:07.454743   46868 retry.go:31] will retry after 1.485337575s: waiting for machine to come up
	I0914 22:46:08.941761   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942136   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942177   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:08.942104   46868 retry.go:31] will retry after 1.640651684s: waiting for machine to come up
	I0914 22:46:10.584576   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584939   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:10.584838   46868 retry.go:31] will retry after 1.656716681s: waiting for machine to come up
	I0914 22:46:12.243599   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244096   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:12.244037   46868 retry.go:31] will retry after 2.692733224s: waiting for machine to come up
	I0914 22:46:14.939726   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940035   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940064   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:14.939986   46868 retry.go:31] will retry after 2.745837942s: waiting for machine to come up
	I0914 22:46:22.180177   46412 start.go:369] acquired machines lock for "embed-certs-588699" in 2m3.238409394s
	I0914 22:46:22.180244   46412 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:22.180256   46412 fix.go:54] fixHost starting: 
	I0914 22:46:22.180661   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:22.180706   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:22.196558   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0914 22:46:22.196900   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:22.197304   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:46:22.197326   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:22.197618   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:22.197808   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:22.197986   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:46:22.199388   46412 fix.go:102] recreateIfNeeded on embed-certs-588699: state=Stopped err=<nil>
	I0914 22:46:22.199423   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	W0914 22:46:22.199595   46412 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:22.202757   46412 out.go:177] * Restarting existing kvm2 VM for "embed-certs-588699" ...
	I0914 22:46:17.687397   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687911   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687937   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:17.687878   46868 retry.go:31] will retry after 3.174192278s: waiting for machine to come up
	I0914 22:46:20.866173   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866687   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Found IP for machine: 192.168.50.175
	I0914 22:46:20.866722   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has current primary IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866737   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserving static IP address...
	I0914 22:46:20.867209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.867245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | skip adding static IP to network mk-default-k8s-diff-port-799144 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"}
	I0914 22:46:20.867263   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserved static IP address: 192.168.50.175
	I0914 22:46:20.867290   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for SSH to be available...
	I0914 22:46:20.867303   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Getting to WaitForSSH function...
	I0914 22:46:20.869597   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.869960   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.869993   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.870103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH client type: external
	I0914 22:46:20.870137   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa (-rw-------)
	I0914 22:46:20.870193   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:20.870218   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | About to run SSH command:
	I0914 22:46:20.870237   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | exit 0
	I0914 22:46:20.959125   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:20.959456   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetConfigRaw
	I0914 22:46:20.960082   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:20.962512   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.962889   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.962915   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.963114   45954 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/config.json ...
	I0914 22:46:20.963282   45954 machine.go:88] provisioning docker machine ...
	I0914 22:46:20.963300   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:20.963509   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963682   45954 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799144"
	I0914 22:46:20.963709   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963899   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:20.966359   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966728   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.966757   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966956   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:20.967146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967287   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967420   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:20.967584   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:20.967963   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:20.967983   45954 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799144 && echo "default-k8s-diff-port-799144" | sudo tee /etc/hostname
	I0914 22:46:21.098114   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799144
	
	I0914 22:46:21.098158   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.100804   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101167   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.101208   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.101532   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101855   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.102028   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.102386   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.102406   45954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799144/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:21.225929   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:21.225964   45954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:21.225992   45954 buildroot.go:174] setting up certificates
	I0914 22:46:21.226007   45954 provision.go:83] configureAuth start
	I0914 22:46:21.226023   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:21.226299   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:21.229126   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229514   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.229555   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.231683   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.231992   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.232027   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.232179   45954 provision.go:138] copyHostCerts
	I0914 22:46:21.232233   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:21.232247   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:21.232321   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:21.232412   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:21.232421   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:21.232446   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:21.232542   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:21.232551   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:21.232572   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:21.232617   45954 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799144 san=[192.168.50.175 192.168.50.175 localhost 127.0.0.1 minikube default-k8s-diff-port-799144]
	I0914 22:46:21.489180   45954 provision.go:172] copyRemoteCerts
	I0914 22:46:21.489234   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:21.489257   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.491989   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492308   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.492334   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.492734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.492869   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.493038   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:21.579991   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0914 22:46:21.599819   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:21.619391   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:21.638607   45954 provision.go:86] duration metric: configureAuth took 412.585328ms
	I0914 22:46:21.638629   45954 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:21.638797   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:21.638867   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.641693   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642033   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.642067   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.642399   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642562   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.642900   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.643239   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.643257   45954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:21.928913   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:21.928940   45954 machine.go:91] provisioned docker machine in 965.645328ms
	I0914 22:46:21.928952   45954 start.go:300] post-start starting for "default-k8s-diff-port-799144" (driver="kvm2")
	I0914 22:46:21.928964   45954 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:21.928987   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:21.929377   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:21.929425   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.931979   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932350   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.932388   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932475   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.932704   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.932923   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.933059   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.020329   45954 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:22.024444   45954 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:22.024458   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:22.024513   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:22.024589   45954 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:22.024672   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:22.033456   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:22.054409   45954 start.go:303] post-start completed in 125.445528ms
	I0914 22:46:22.054427   45954 fix.go:56] fixHost completed within 19.502785226s
	I0914 22:46:22.054444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.057353   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057690   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.057721   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057925   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.058139   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058304   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058483   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.058657   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:22.059051   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:22.059065   45954 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:22.180023   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731582.133636857
	
	I0914 22:46:22.180044   45954 fix.go:206] guest clock: 1694731582.133636857
	I0914 22:46:22.180054   45954 fix.go:219] Guest: 2023-09-14 22:46:22.133636857 +0000 UTC Remote: 2023-09-14 22:46:22.054430307 +0000 UTC m=+214.661061156 (delta=79.20655ms)
	I0914 22:46:22.180078   45954 fix.go:190] guest clock delta is within tolerance: 79.20655ms
	I0914 22:46:22.180084   45954 start.go:83] releasing machines lock for "default-k8s-diff-port-799144", held for 19.628473828s
	I0914 22:46:22.180114   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.180408   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:22.183182   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183507   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.183543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183675   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184175   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184384   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184494   45954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:22.184535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.184627   45954 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:22.184662   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.187447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187604   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187813   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.187839   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187971   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.187986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.188024   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.188151   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.188153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188344   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188391   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188500   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.188519   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188618   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.303009   45954 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:22.308185   45954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:22.450504   45954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:22.455642   45954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:22.455700   45954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:22.468430   45954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:22.468453   45954 start.go:469] detecting cgroup driver to use...
	I0914 22:46:22.468509   45954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:22.483524   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:22.494650   45954 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:22.494706   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:22.506589   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:22.518370   45954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:22.619545   45954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:22.737486   45954 docker.go:212] disabling docker service ...
	I0914 22:46:22.737551   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:22.749267   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:22.759866   45954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:22.868561   45954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:22.973780   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:22.986336   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:23.004987   45954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:23.005042   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.013821   45954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:23.013889   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.022487   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.030875   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.038964   45954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:23.047246   45954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:23.054339   45954 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:23.054379   45954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:23.066649   45954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:23.077024   45954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:23.174635   45954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:23.337031   45954 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:23.337113   45954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:23.342241   45954 start.go:537] Will wait 60s for crictl version
	I0914 22:46:23.342308   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:46:23.345832   45954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:23.377347   45954 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:23.377433   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.425559   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.492770   45954 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:22.203936   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Start
	I0914 22:46:22.204098   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring networks are active...
	I0914 22:46:22.204740   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network default is active
	I0914 22:46:22.205158   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network mk-embed-certs-588699 is active
	I0914 22:46:22.205524   46412 main.go:141] libmachine: (embed-certs-588699) Getting domain xml...
	I0914 22:46:22.206216   46412 main.go:141] libmachine: (embed-certs-588699) Creating domain...
	I0914 22:46:23.529479   46412 main.go:141] libmachine: (embed-certs-588699) Waiting to get IP...
	I0914 22:46:23.530274   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.530639   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.530694   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.530608   46986 retry.go:31] will retry after 299.617651ms: waiting for machine to come up
	I0914 22:46:23.494065   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:23.496974   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497458   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:23.497490   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497694   45954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:23.501920   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:23.517500   45954 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:23.517542   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:23.554344   45954 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:23.554403   45954 ssh_runner.go:195] Run: which lz4
	I0914 22:46:23.558745   45954 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:23.563443   45954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:23.563488   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:25.365372   45954 crio.go:444] Took 1.806660 seconds to copy over tarball
	I0914 22:46:25.365442   45954 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:23.832332   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.833457   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.833488   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.832911   46986 retry.go:31] will retry after 315.838121ms: waiting for machine to come up
	I0914 22:46:24.150532   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.150980   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.151009   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.150942   46986 retry.go:31] will retry after 369.928332ms: waiting for machine to come up
	I0914 22:46:24.522720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.523232   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.523257   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.523145   46986 retry.go:31] will retry after 533.396933ms: waiting for machine to come up
	I0914 22:46:25.057818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.058371   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.058405   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.058318   46986 retry.go:31] will retry after 747.798377ms: waiting for machine to come up
	I0914 22:46:25.807422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.807912   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.807956   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.807874   46986 retry.go:31] will retry after 947.037376ms: waiting for machine to come up
	I0914 22:46:26.756214   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:26.756720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:26.756757   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:26.756689   46986 retry.go:31] will retry after 1.117164865s: waiting for machine to come up
	I0914 22:46:27.875432   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:27.875931   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:27.875953   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:27.875886   46986 retry.go:31] will retry after 1.117181084s: waiting for machine to come up
	I0914 22:46:28.197684   45954 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.832216899s)
	I0914 22:46:28.197710   45954 crio.go:451] Took 2.832313 seconds to extract the tarball
	I0914 22:46:28.197718   45954 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:28.236545   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:28.286349   45954 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:28.286374   45954 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:28.286449   45954 ssh_runner.go:195] Run: crio config
	I0914 22:46:28.344205   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:28.344231   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:28.344253   45954 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:28.344289   45954 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.175 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799144 NodeName:default-k8s-diff-port-799144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:28.344454   45954 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.175
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799144"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:28.344536   45954 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-799144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0914 22:46:28.344591   45954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:28.354383   45954 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:28.354459   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:28.363277   45954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0914 22:46:28.378875   45954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:28.393535   45954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0914 22:46:28.408319   45954 ssh_runner.go:195] Run: grep 192.168.50.175	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:28.411497   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:28.421507   45954 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144 for IP: 192.168.50.175
	I0914 22:46:28.421536   45954 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:28.421702   45954 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:28.421742   45954 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:28.421805   45954 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.key
	I0914 22:46:28.421858   45954 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key.0216c1e7
	I0914 22:46:28.421894   45954 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key
	I0914 22:46:28.421994   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:28.422020   45954 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:28.422027   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:28.422048   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:28.422074   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:28.422095   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:28.422139   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:28.422695   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:28.443528   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:46:28.463679   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:28.483317   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:28.503486   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:28.523709   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:28.544539   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:28.565904   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:28.587316   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:28.611719   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:28.632158   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:28.652227   45954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:28.667709   45954 ssh_runner.go:195] Run: openssl version
	I0914 22:46:28.673084   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:28.682478   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686693   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686747   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.691836   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:28.701203   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:28.710996   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715353   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715408   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.720765   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:28.730750   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:28.740782   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745186   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745250   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.750589   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:28.760675   45954 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:28.764920   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:28.770573   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:28.776098   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:28.783455   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:28.790699   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:28.797514   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:28.804265   45954 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-799144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:28.804376   45954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:28.804427   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:28.833994   45954 cri.go:89] found id: ""
	I0914 22:46:28.834051   45954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:28.843702   45954 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:28.843724   45954 kubeadm.go:636] restartCluster start
	I0914 22:46:28.843769   45954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:28.852802   45954 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.854420   45954 kubeconfig.go:92] found "default-k8s-diff-port-799144" server: "https://192.168.50.175:8444"
	I0914 22:46:28.858058   45954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:28.866914   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.866968   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.877946   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.877969   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.878014   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.888579   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.389311   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.389420   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.401725   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.889346   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.889451   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.902432   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.388985   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.389062   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.401302   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.888853   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.888949   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.901032   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.389622   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.389733   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.405102   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.888685   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.888803   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.904300   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:32.388876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.388944   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.402419   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.995080   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:28.999205   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:28.999224   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:28.995414   46986 retry.go:31] will retry after 1.657878081s: waiting for machine to come up
	I0914 22:46:30.655422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:30.656029   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:30.656059   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:30.655960   46986 retry.go:31] will retry after 2.320968598s: waiting for machine to come up
	I0914 22:46:32.978950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:32.979423   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:32.979452   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:32.979369   46986 retry.go:31] will retry after 2.704173643s: waiting for machine to come up
	I0914 22:46:32.889585   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.889658   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.902514   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.388806   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.388906   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.405028   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.889633   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.889728   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.906250   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.388736   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.388810   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.403376   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.888851   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.888934   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.905873   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.389446   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.389516   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.404872   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.889475   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.889569   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.902431   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.388954   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.389054   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.401778   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.889442   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.889529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.902367   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:37.388925   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.389009   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.401860   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.685608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:35.686027   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:35.686064   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:35.685964   46986 retry.go:31] will retry after 2.240780497s: waiting for machine to come up
	I0914 22:46:37.928020   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:37.928402   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:37.928442   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:37.928354   46986 retry.go:31] will retry after 2.734049647s: waiting for machine to come up
	I0914 22:46:41.860186   46713 start.go:369] acquired machines lock for "old-k8s-version-930717" in 1m21.238611742s
	I0914 22:46:41.860234   46713 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:41.860251   46713 fix.go:54] fixHost starting: 
	I0914 22:46:41.860683   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:41.860738   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:41.877474   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0914 22:46:41.877964   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:41.878542   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:46:41.878568   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:41.878874   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:41.879057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:46:41.879276   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:46:41.880990   46713 fix.go:102] recreateIfNeeded on old-k8s-version-930717: state=Stopped err=<nil>
	I0914 22:46:41.881019   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	W0914 22:46:41.881175   46713 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:41.883128   46713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-930717" ...
	I0914 22:46:37.888876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.888950   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.901522   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.389056   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:38.389140   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:38.400632   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.867426   45954 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:38.867461   45954 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:38.867487   45954 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:38.867557   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:38.898268   45954 cri.go:89] found id: ""
	I0914 22:46:38.898328   45954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:38.914871   45954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:38.924737   45954 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:38.924785   45954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934436   45954 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934455   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.042672   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.982954   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.158791   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.235541   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.312855   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:46:40.312926   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.328687   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.842859   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.343019   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.842336   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.342351   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.665315   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.665775   46412 main.go:141] libmachine: (embed-certs-588699) Found IP for machine: 192.168.61.205
	I0914 22:46:40.665795   46412 main.go:141] libmachine: (embed-certs-588699) Reserving static IP address...
	I0914 22:46:40.665807   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has current primary IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.666273   46412 main.go:141] libmachine: (embed-certs-588699) Reserved static IP address: 192.168.61.205
	I0914 22:46:40.666316   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.666334   46412 main.go:141] libmachine: (embed-certs-588699) Waiting for SSH to be available...
	I0914 22:46:40.666375   46412 main.go:141] libmachine: (embed-certs-588699) DBG | skip adding static IP to network mk-embed-certs-588699 - found existing host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"}
	I0914 22:46:40.666401   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Getting to WaitForSSH function...
	I0914 22:46:40.668206   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668515   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.668542   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668654   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH client type: external
	I0914 22:46:40.668689   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa (-rw-------)
	I0914 22:46:40.668716   46412 main.go:141] libmachine: (embed-certs-588699) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:40.668728   46412 main.go:141] libmachine: (embed-certs-588699) DBG | About to run SSH command:
	I0914 22:46:40.668736   46412 main.go:141] libmachine: (embed-certs-588699) DBG | exit 0
	I0914 22:46:40.751202   46412 main.go:141] libmachine: (embed-certs-588699) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:40.751584   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetConfigRaw
	I0914 22:46:40.752291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:40.754685   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755054   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.755087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755318   46412 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/config.json ...
	I0914 22:46:40.755578   46412 machine.go:88] provisioning docker machine ...
	I0914 22:46:40.755603   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:40.755799   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.755940   46412 buildroot.go:166] provisioning hostname "embed-certs-588699"
	I0914 22:46:40.755959   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.756109   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.758111   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758435   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.758481   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758547   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.758686   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758798   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758983   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.759108   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.759567   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.759586   46412 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-588699 && echo "embed-certs-588699" | sudo tee /etc/hostname
	I0914 22:46:40.882559   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-588699
	
	I0914 22:46:40.882615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.885741   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.886137   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886403   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.886635   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886810   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886964   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.887176   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.887633   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.887662   46412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-588699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-588699/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-588699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:41.007991   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:41.008024   46412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:41.008075   46412 buildroot.go:174] setting up certificates
	I0914 22:46:41.008103   46412 provision.go:83] configureAuth start
	I0914 22:46:41.008118   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:41.008615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.011893   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012262   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.012295   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012467   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.014904   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015343   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.015378   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015551   46412 provision.go:138] copyHostCerts
	I0914 22:46:41.015605   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:41.015618   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:41.015691   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:41.015847   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:41.015864   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:41.015897   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:41.015979   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:41.015989   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:41.016019   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:41.016080   46412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.embed-certs-588699 san=[192.168.61.205 192.168.61.205 localhost 127.0.0.1 minikube embed-certs-588699]
	I0914 22:46:41.134486   46412 provision.go:172] copyRemoteCerts
	I0914 22:46:41.134537   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:41.134559   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.137472   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137789   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.137818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137995   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.138216   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.138365   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.138536   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.224196   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:41.244551   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:46:41.267745   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:41.292472   46412 provision.go:86] duration metric: configureAuth took 284.355734ms
	I0914 22:46:41.292497   46412 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:41.292668   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:41.292748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.295661   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296010   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.296042   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296246   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.296469   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296652   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296836   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.297031   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.297522   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.297556   46412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:41.609375   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:41.609417   46412 machine.go:91] provisioned docker machine in 853.82264ms
	I0914 22:46:41.609431   46412 start.go:300] post-start starting for "embed-certs-588699" (driver="kvm2")
	I0914 22:46:41.609444   46412 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:41.609472   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.609831   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:41.609890   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.613037   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613497   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.613525   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613662   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.613854   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.614023   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.614142   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.704618   46412 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:41.709759   46412 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:41.709787   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:41.709867   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:41.709991   46412 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:41.710127   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:41.721261   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:41.742359   46412 start.go:303] post-start completed in 132.913862ms
	I0914 22:46:41.742387   46412 fix.go:56] fixHost completed within 19.562130605s
	I0914 22:46:41.742418   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.745650   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.746172   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746369   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.746564   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746781   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746944   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.747138   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.747629   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.747648   46412 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:41.860006   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731601.811427748
	
	I0914 22:46:41.860030   46412 fix.go:206] guest clock: 1694731601.811427748
	I0914 22:46:41.860040   46412 fix.go:219] Guest: 2023-09-14 22:46:41.811427748 +0000 UTC Remote: 2023-09-14 22:46:41.742391633 +0000 UTC m=+142.955285980 (delta=69.036115ms)
	I0914 22:46:41.860091   46412 fix.go:190] guest clock delta is within tolerance: 69.036115ms
	I0914 22:46:41.860098   46412 start.go:83] releasing machines lock for "embed-certs-588699", held for 19.679882828s
	I0914 22:46:41.860131   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.860411   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.863136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863584   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.863618   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863721   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864206   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864398   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864477   46412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:41.864514   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.864639   46412 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:41.864666   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.867568   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.867976   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.868028   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868147   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868248   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868373   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868579   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.868691   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868833   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.868876   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.869026   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.980624   46412 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:41.986113   46412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:42.134956   46412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:42.141030   46412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:42.141101   46412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:42.158635   46412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:42.158660   46412 start.go:469] detecting cgroup driver to use...
	I0914 22:46:42.158722   46412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:42.173698   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:42.184948   46412 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:42.185007   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:42.196434   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:42.208320   46412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:42.326624   46412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:42.459498   46412 docker.go:212] disabling docker service ...
	I0914 22:46:42.459567   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:42.472479   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:42.486651   46412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:42.636161   46412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:42.739841   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:42.758562   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:42.779404   46412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:42.779472   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.787902   46412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:42.787954   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.799513   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.811428   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.823348   46412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:42.835569   46412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:42.842820   46412 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:42.842885   46412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:42.855225   46412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:42.863005   46412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:42.979756   46412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:43.181316   46412 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:43.181384   46412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:43.191275   46412 start.go:537] Will wait 60s for crictl version
	I0914 22:46:43.191343   46412 ssh_runner.go:195] Run: which crictl
	I0914 22:46:43.196264   46412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:43.228498   46412 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:43.228589   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.281222   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.341816   46412 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:43.343277   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:43.346473   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.346835   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:43.346882   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.347084   46412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:43.351205   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:43.364085   46412 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:43.364156   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:43.400558   46412 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:43.400634   46412 ssh_runner.go:195] Run: which lz4
	I0914 22:46:43.404906   46412 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:43.409239   46412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:43.409277   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:41.885236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Start
	I0914 22:46:41.885399   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring networks are active...
	I0914 22:46:41.886125   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network default is active
	I0914 22:46:41.886511   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network mk-old-k8s-version-930717 is active
	I0914 22:46:41.886855   46713 main.go:141] libmachine: (old-k8s-version-930717) Getting domain xml...
	I0914 22:46:41.887524   46713 main.go:141] libmachine: (old-k8s-version-930717) Creating domain...
	I0914 22:46:43.317748   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting to get IP...
	I0914 22:46:43.318757   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.319197   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.319288   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.319176   47160 retry.go:31] will retry after 287.487011ms: waiting for machine to come up
	I0914 22:46:43.608890   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.609712   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.609738   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.609656   47160 retry.go:31] will retry after 289.187771ms: waiting for machine to come up
	I0914 22:46:43.900234   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.900655   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.900679   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.900576   47160 retry.go:31] will retry after 433.007483ms: waiting for machine to come up
	I0914 22:46:44.335318   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.335775   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.335804   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.335727   47160 retry.go:31] will retry after 383.295397ms: waiting for machine to come up
	I0914 22:46:44.720415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.720967   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.721001   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.720856   47160 retry.go:31] will retry after 698.454643ms: waiting for machine to come up
	I0914 22:46:45.420833   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:45.421349   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:45.421391   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:45.421297   47160 retry.go:31] will retry after 938.590433ms: waiting for machine to come up
	I0914 22:46:42.842954   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.867206   45954 api_server.go:72] duration metric: took 2.554352134s to wait for apiserver process to appear ...
	I0914 22:46:42.867238   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:46:42.867257   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.755748   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:46:46.755780   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:46:46.755832   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.873209   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:46.873243   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.373637   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.391311   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.391349   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.873646   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.880286   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.880323   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:48.373423   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:48.389682   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:46:48.415694   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:46:48.415727   45954 api_server.go:131] duration metric: took 5.548481711s to wait for apiserver health ...
	I0914 22:46:48.415739   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.415748   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.417375   45954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
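	(The healthz probes above poll https://192.168.50.175:8444/healthz until it returns 200, tolerating the 403 "system:anonymous" response and the 500 responses while post-start hooks finish. Below is a minimal, illustrative Go sketch of such a polling loop, not minikube's actual api_server.go logic; TLS verification is skipped purely to keep the sketch self-contained.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given /healthz URL until it returns 200 OK or the
	// timeout elapses, logging interim 403/500 responses and retrying.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// 403 or 500 while bootstrap hooks are still running: keep polling.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.175:8444/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}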
	I0914 22:46:45.238555   46412 crio.go:444] Took 1.833681 seconds to copy over tarball
	I0914 22:46:45.238634   46412 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:48.251155   46412 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012492519s)
	I0914 22:46:48.251176   46412 crio.go:451] Took 3.012596 seconds to extract the tarball
	I0914 22:46:48.251184   46412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:48.290336   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:48.338277   46412 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:48.338302   46412 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:48.338378   46412 ssh_runner.go:195] Run: crio config
	I0914 22:46:48.402542   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.402564   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.402583   46412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:48.402604   46412 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.205 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-588699 NodeName:embed-certs-588699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:48.402791   46412 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-588699"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:48.402883   46412 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-588699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:46:48.402958   46412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:48.414406   46412 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:48.414484   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:48.426437   46412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 22:46:48.445351   46412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:48.463696   46412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0914 22:46:48.481887   46412 ssh_runner.go:195] Run: grep 192.168.61.205	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:48.485825   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:48.500182   46412 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699 for IP: 192.168.61.205
	I0914 22:46:48.500215   46412 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:48.500362   46412 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:48.500417   46412 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:48.500514   46412 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/client.key
	I0914 22:46:48.500600   46412 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key.8dac69f7
	I0914 22:46:48.500726   46412 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key
	I0914 22:46:48.500885   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:48.500926   46412 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:48.500942   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:48.500976   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:48.501008   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:48.501039   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:48.501096   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:48.501918   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:48.528790   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:46:48.558557   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:48.583664   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:48.608274   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:48.631638   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:48.655163   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:48.677452   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:48.700443   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:48.724547   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:48.751559   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:48.778910   46412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:48.794369   46412 ssh_runner.go:195] Run: openssl version
	I0914 22:46:48.799778   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:48.809263   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814790   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814848   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.820454   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:48.829942   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:46.361228   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:46.361816   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:46.361846   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:46.361795   47160 retry.go:31] will retry after 1.00738994s: waiting for machine to come up
	I0914 22:46:47.370525   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:47.370964   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:47.370991   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:47.370921   47160 retry.go:31] will retry after 1.441474351s: waiting for machine to come up
	I0914 22:46:48.813921   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:48.814415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:48.814447   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:48.814362   47160 retry.go:31] will retry after 1.497562998s: waiting for machine to come up
	I0914 22:46:50.313674   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:50.314191   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:50.314221   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:50.314137   47160 retry.go:31] will retry after 1.620308161s: waiting for machine to come up
	I0914 22:46:48.418825   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:46:48.456715   45954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:46:48.496982   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:46:48.515172   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:46:48.515209   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:46:48.515223   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:46:48.515234   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:46:48.515247   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:46:48.515261   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:46:48.515272   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:46:48.515285   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:46:48.515295   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:46:48.515307   45954 system_pods.go:74] duration metric: took 18.305048ms to wait for pod list to return data ...
	I0914 22:46:48.515320   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:46:48.518842   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:46:48.518875   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:46:48.518888   45954 node_conditions.go:105] duration metric: took 3.562448ms to run NodePressure ...
	I0914 22:46:48.518908   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:50.951051   45954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.432118027s)
	I0914 22:46:50.951087   45954 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959708   45954 kubeadm.go:787] kubelet initialised
	I0914 22:46:50.959735   45954 kubeadm.go:788] duration metric: took 8.637125ms waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959745   45954 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:50.966214   45954 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.975076   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975106   45954 pod_ready.go:81] duration metric: took 8.863218ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.975118   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975129   45954 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.982438   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982471   45954 pod_ready.go:81] duration metric: took 7.330437ms waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.982485   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982493   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.991067   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991102   45954 pod_ready.go:81] duration metric: took 8.574268ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.991115   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991125   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.006696   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006732   45954 pod_ready.go:81] duration metric: took 15.595604ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.006745   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006755   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.354645   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354678   45954 pod_ready.go:81] duration metric: took 347.913938ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.354690   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354702   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.754959   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.754998   45954 pod_ready.go:81] duration metric: took 400.283619ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.755012   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.755022   45954 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:52.156253   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156299   45954 pod_ready.go:81] duration metric: took 401.260791ms waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:52.156314   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156327   45954 pod_ready.go:38] duration metric: took 1.196571114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:52.156352   45954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:46:52.169026   45954 ops.go:34] apiserver oom_adj: -16
	I0914 22:46:52.169049   45954 kubeadm.go:640] restartCluster took 23.325317121s
	I0914 22:46:52.169059   45954 kubeadm.go:406] StartCluster complete in 23.364799998s
	I0914 22:46:52.169079   45954 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.169161   45954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:46:52.171787   45954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.172077   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:46:52.172229   45954 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:46:52.172310   45954 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172332   45954 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-799144"
	I0914 22:46:52.172325   45954 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799144"
	W0914 22:46:52.172340   45954 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:46:52.172347   45954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799144"
	I0914 22:46:52.172351   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:52.172394   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.172394   45954 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172424   45954 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.172436   45954 addons.go:240] addon metrics-server should already be in state true
	I0914 22:46:52.172500   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.173205   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173252   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173383   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173451   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173744   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173822   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.178174   45954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-799144" context rescaled to 1 replicas
	I0914 22:46:52.178208   45954 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:46:52.180577   45954 out.go:177] * Verifying Kubernetes components...
	I0914 22:46:52.182015   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:46:52.194030   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0914 22:46:52.194040   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I0914 22:46:52.194506   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.194767   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.195059   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195078   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195219   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195235   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195420   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.195642   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.195715   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.196346   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.196392   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.198560   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0914 22:46:52.199130   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.199612   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.199641   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.199995   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.200530   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.200575   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.206536   45954 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.206558   45954 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:46:52.206584   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.206941   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.206973   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.215857   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0914 22:46:52.216266   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.216801   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.216825   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.217297   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.217484   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.220211   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I0914 22:46:52.220740   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.221296   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.221314   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.221798   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.221986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.222185   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.224162   45954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:46:52.224261   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.225483   45954 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.225494   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:46:52.225511   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.225526   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0914 22:46:52.227067   45954 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:46:52.225976   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.228337   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:46:52.228354   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:46:52.228373   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.228750   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.228764   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.228959   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229601   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.229674   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.229702   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229908   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.230068   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.230171   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.230203   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.230280   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.230503   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.232673   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233097   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.233153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.233536   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.233684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.233821   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.251500   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43473
	I0914 22:46:52.252069   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.252702   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.252722   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.253171   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.253419   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.255233   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.255574   45954 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.255591   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:46:52.255609   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.258620   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.259178   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259379   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.259584   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.259754   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.259961   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.350515   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.367291   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:46:52.367309   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:46:52.413141   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:46:52.413170   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:46:52.419647   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.462672   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:52.462698   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:46:52.519331   45954 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:46:52.519330   45954 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:52.530851   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:53.719523   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368967292s)
	I0914 22:46:53.719575   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719582   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299890259s)
	I0914 22:46:53.719616   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719638   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.719589   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720079   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720083   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720097   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720101   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720107   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720111   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720121   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720080   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720404   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720414   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720425   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720501   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720525   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720538   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720553   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720804   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720822   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.721724   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.190817165s)
	I0914 22:46:53.721771   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.721784   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.722084   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.722100   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.722089   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.722115   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.722128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.723592   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.723602   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.723614   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.723631   45954 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-799144"
	I0914 22:46:53.725666   45954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:46:48.840421   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.179960   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.180026   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.185490   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:49.194744   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:49.205937   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210532   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210582   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.215917   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
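	The two `openssl x509 -hash` / `ln -fs` pairs above expose the extra CA certificates under their OpenSSL subject-hash names (51391683.0, 3ec20f2e.0) in /etc/ssl/certs, which is how OpenSSL-based clients look up trusted CAs. A minimal local sketch of that pattern, assuming direct file access rather than the test's ssh_runner (the helper name is illustrative, the paths are taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash mirrors the "openssl x509 -hash" + "ln -fs" steps in the log:
// compute the subject hash of a CA certificate and expose it under
// /etc/ssl/certs/<hash>.0 so OpenSSL can find it by hash.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Replace any stale link; `ln -fs` in the log does the same thing.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/13485.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```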
	I0914 22:46:49.225393   46412 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:49.229604   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:49.234795   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:49.239907   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:49.245153   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:49.250558   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:49.256142   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
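	Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours. An equivalent check written directly against crypto/x509 — a sketch only, using one of the certificate paths listed above:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what "openssl x509 -checkend <seconds>" verifies in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```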
	I0914 22:46:49.261518   46412 kubeadm.go:404] StartCluster: {Name:embed-certs-588699 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:49.261618   46412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:49.261687   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:49.291460   46412 cri.go:89] found id: ""
	I0914 22:46:49.291560   46412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:49.300496   46412 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:49.300558   46412 kubeadm.go:636] restartCluster start
	I0914 22:46:49.300616   46412 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:49.309827   46412 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.311012   46412 kubeconfig.go:92] found "embed-certs-588699" server: "https://192.168.61.205:8443"
	I0914 22:46:49.313336   46412 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:49.321470   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.321528   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.332257   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.332275   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.332320   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.345427   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.846146   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.846240   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.859038   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.345492   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.345583   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.358070   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.845544   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.845605   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.861143   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.345602   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.345675   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.357406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.845964   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.846082   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.860079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.346093   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.346159   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.360952   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.845612   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.845717   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.860504   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:53.345991   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.360947   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
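	The block above repeats roughly every 500ms: `sudo pgrep -xnf kube-apiserver.*minikube.*` exits 1 while no matching process exists, so api_server.go keeps logging "Checking apiserver status" until the apiserver comes back. A condensed sketch of that poll, with an illustrative helper name (the real loop lives in api_server.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls `pgrep -xnf kube-apiserver.*minikube.*` until a
// matching process shows up or the timeout expires. pgrep exits 1 while
// nothing matches, which surfaces here as a non-nil error from Output().
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(30 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
```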
	I0914 22:46:51.936297   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:51.936809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:51.936840   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:51.936747   47160 retry.go:31] will retry after 2.284330296s: waiting for machine to come up
	I0914 22:46:54.222960   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:54.223478   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:54.223530   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:54.223417   47160 retry.go:31] will retry after 3.537695113s: waiting for machine to come up
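	The "will retry after 2.28s / 3.54s / 3.77s" lines come from retry.go while libmachine waits for the old-k8s-version VM to obtain an IP: each failed lookup schedules another attempt after a randomized, growing delay. A sketch of that shape (the backoff curve here is illustrative, not minikube's exact implementation):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the attempts run out,
// sleeping a randomized, growing interval between tries - the same pattern as
// the retry.go lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(sleep)
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(5, time.Second, func() error {
		return errors.New("machine has no IP yet") // stand-in for the real DHCP lease check
	})
	fmt.Println(err)
}
```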
	I0914 22:46:53.726984   45954 addons.go:502] enable addons completed in 1.554762762s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:46:54.641725   45954 node_ready.go:58] node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:57.141217   45954 node_ready.go:49] node "default-k8s-diff-port-799144" has status "Ready":"True"
	I0914 22:46:57.141240   45954 node_ready.go:38] duration metric: took 4.621872993s waiting for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:57.141250   45954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:57.151019   45954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162159   45954 pod_ready.go:92] pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace has status "Ready":"True"
	I0914 22:46:57.162180   45954 pod_ready.go:81] duration metric: took 11.133949ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162189   45954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
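	The pod_ready lines above check each system-critical pod for the Ready condition. A minimal client-go sketch of that check, assuming a kubeconfig reachable at the path the test exports (KUBECONFIG=/var/lib/minikube/kubeconfig) and using the coredns pod name from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has the Ready condition set to True,
// which is the check behind the `pod ... has status "Ready":"True"` lines above.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := isPodReady(cs, "kube-system", "coredns-5dd5756b68-8phxz")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```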
	I0914 22:46:53.845734   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.845815   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.858406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.346078   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.346138   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.360079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.845738   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.845801   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.861945   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.346533   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.346627   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.360445   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.845577   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.845681   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.856800   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.346374   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.346461   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.357724   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.846264   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.846376   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.857963   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.346006   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.357336   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.845877   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.845944   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.857310   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:58.345855   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.345925   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.357766   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.762315   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:57.762689   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:57.762714   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:57.762651   47160 retry.go:31] will retry after 3.773493672s: waiting for machine to come up
	I0914 22:46:59.185077   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:01.185320   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:02.912525   45407 start.go:369] acquired machines lock for "no-preload-344363" in 55.358672707s
	I0914 22:47:02.912580   45407 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:47:02.912592   45407 fix.go:54] fixHost starting: 
	I0914 22:47:02.913002   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:47:02.913035   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:47:02.932998   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0914 22:47:02.933535   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:47:02.933956   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:47:02.933977   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:47:02.934303   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:47:02.934484   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:02.934627   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:47:02.936412   45407 fix.go:102] recreateIfNeeded on no-preload-344363: state=Stopped err=<nil>
	I0914 22:47:02.936438   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	W0914 22:47:02.936601   45407 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:47:02.938235   45407 out.go:177] * Restarting existing kvm2 VM for "no-preload-344363" ...
	I0914 22:46:58.845728   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.845806   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.859436   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:59.322167   46412 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:59.322206   46412 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:59.322218   46412 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:59.322278   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:59.352268   46412 cri.go:89] found id: ""
	I0914 22:46:59.352371   46412 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:59.366742   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:59.374537   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:59.374598   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382227   46412 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382251   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:59.486171   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.268311   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.462362   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.528925   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.601616   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:00.601697   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:00.623311   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.140972   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.640574   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.141044   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.640374   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.140881   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.166662   46412 api_server.go:72] duration metric: took 2.565044214s to wait for apiserver process to appear ...
	I0914 22:47:03.166688   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:03.166703   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
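	Once the apiserver process exists, the test switches from pgrep to polling the /healthz endpoint at the URL shown above. A sketch of that probe; TLS verification is skipped here only to keep the example short, whereas the real code trusts the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver /healthz endpoint until it answers "ok" or
// the timeout expires, mirroring the "Checking apiserver healthz" step above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.205:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```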
	I0914 22:47:01.540578   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541058   46713 main.go:141] libmachine: (old-k8s-version-930717) Found IP for machine: 192.168.72.70
	I0914 22:47:01.541095   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has current primary IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541106   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserving static IP address...
	I0914 22:47:01.541552   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserved static IP address: 192.168.72.70
	I0914 22:47:01.541579   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting for SSH to be available...
	I0914 22:47:01.541613   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.541646   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | skip adding static IP to network mk-old-k8s-version-930717 - found existing host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"}
	I0914 22:47:01.541672   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Getting to WaitForSSH function...
	I0914 22:47:01.543898   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544285   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.544317   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544428   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH client type: external
	I0914 22:47:01.544451   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa (-rw-------)
	I0914 22:47:01.544499   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:01.544518   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | About to run SSH command:
	I0914 22:47:01.544552   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | exit 0
	I0914 22:47:01.639336   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:01.639694   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetConfigRaw
	I0914 22:47:01.640324   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.642979   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643345   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.643389   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643643   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:47:01.643833   46713 machine.go:88] provisioning docker machine ...
	I0914 22:47:01.643855   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:01.644085   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644249   46713 buildroot.go:166] provisioning hostname "old-k8s-version-930717"
	I0914 22:47:01.644272   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644434   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.646429   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.646771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.646819   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.647008   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.647209   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647360   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647536   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.647737   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.648245   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.648270   46713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-930717 && echo "old-k8s-version-930717" | sudo tee /etc/hostname
	I0914 22:47:01.789438   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-930717
	
	I0914 22:47:01.789472   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.792828   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793229   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.793277   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793459   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.793644   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793778   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793953   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.794120   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.794459   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.794478   46713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-930717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-930717/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-930717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:01.928496   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:01.928536   46713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:01.928567   46713 buildroot.go:174] setting up certificates
	I0914 22:47:01.928586   46713 provision.go:83] configureAuth start
	I0914 22:47:01.928609   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.928914   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.931976   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932368   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.932398   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932542   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.934939   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935311   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.935344   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935480   46713 provision.go:138] copyHostCerts
	I0914 22:47:01.935537   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:01.935548   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:01.935620   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:01.935775   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:01.935789   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:01.935824   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:01.935970   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:01.935981   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:01.936010   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:01.936086   46713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-930717 san=[192.168.72.70 192.168.72.70 localhost 127.0.0.1 minikube old-k8s-version-930717]
	I0914 22:47:02.167446   46713 provision.go:172] copyRemoteCerts
	I0914 22:47:02.167510   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:02.167534   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.170442   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.170862   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.170900   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.171089   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.171302   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.171496   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.171645   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.267051   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:02.289098   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 22:47:02.312189   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:02.334319   46713 provision.go:86] duration metric: configureAuth took 405.716896ms
	I0914 22:47:02.334346   46713 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:02.334555   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:47:02.334638   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.337255   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337605   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.337637   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.337949   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338100   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338240   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.338384   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.338859   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.338890   46713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:02.654307   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:02.654332   46713 machine.go:91] provisioned docker machine in 1.010485195s
	I0914 22:47:02.654345   46713 start.go:300] post-start starting for "old-k8s-version-930717" (driver="kvm2")
	I0914 22:47:02.654358   46713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:02.654382   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.654747   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:02.654782   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.657773   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658153   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.658182   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658425   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.658630   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.658812   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.659001   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.750387   46713 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:02.754444   46713 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:02.754468   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:02.754545   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:02.754654   46713 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:02.754762   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:02.765781   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:02.788047   46713 start.go:303] post-start completed in 133.686385ms
	I0914 22:47:02.788072   46713 fix.go:56] fixHost completed within 20.927830884s
	I0914 22:47:02.788098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.791051   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791408   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.791441   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791628   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.791840   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792041   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792215   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.792383   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.792817   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.792836   46713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:02.912359   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731622.856601606
	
	I0914 22:47:02.912381   46713 fix.go:206] guest clock: 1694731622.856601606
	I0914 22:47:02.912391   46713 fix.go:219] Guest: 2023-09-14 22:47:02.856601606 +0000 UTC Remote: 2023-09-14 22:47:02.788077838 +0000 UTC m=+102.306332554 (delta=68.523768ms)
	I0914 22:47:02.912413   46713 fix.go:190] guest clock delta is within tolerance: 68.523768ms
	I0914 22:47:02.912424   46713 start.go:83] releasing machines lock for "old-k8s-version-930717", held for 21.052207532s
	I0914 22:47:02.912457   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.912730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:02.915769   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916200   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.916265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916453   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917073   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917245   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917352   46713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:02.917397   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.917535   46713 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:02.917563   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.920256   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920363   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920656   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920695   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920724   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920744   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920959   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921261   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921282   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921431   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921489   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921567   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.921635   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:03.014070   46713 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:03.047877   46713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:03.192347   46713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:03.200249   46713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:03.200324   46713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:03.215110   46713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:03.215138   46713 start.go:469] detecting cgroup driver to use...
	I0914 22:47:03.215201   46713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:03.228736   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:03.241326   46713 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:03.241377   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:03.253001   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:03.264573   46713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:03.371107   46713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:03.512481   46713 docker.go:212] disabling docker service ...
	I0914 22:47:03.512554   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:03.526054   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:03.537583   46713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:03.662087   46713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:03.793448   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:03.807574   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:03.828240   46713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0914 22:47:03.828311   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.842435   46713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:03.842490   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.856199   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.867448   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
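	The three sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.1, switch the cgroup manager to cgroupfs, and run conmon in the pod cgroup. A local Go sketch of the same line-level rewrite (the function name is illustrative; the real edits run over ssh_runner):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same edits as the sed commands in the log.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// pause_image = "registry.k8s.io/pause:3.1"
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	out = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
		ReplaceAll(out, nil)
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```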
	I0914 22:47:03.878222   46713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:03.891806   46713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:03.899686   46713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:03.899740   46713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:03.912584   46713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
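	Because the sysctl binary cannot stat the bridge-nf key before br_netfilter is loaded, the test falls back to modprobe and then enables IP forwarding by writing to procfs. The same write in Go, for reference (requires root, just like the sudo command above):

```go
package main

import "os"

// enableIPForward is the equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
func enableIPForward() error {
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := enableIPForward(); err != nil {
		panic(err)
	}
}
```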
	I0914 22:47:03.920771   46713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:04.040861   46713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:47:04.230077   46713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:04.230147   46713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:04.235664   46713 start.go:537] Will wait 60s for crictl version
	I0914 22:47:04.235726   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:04.239737   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:04.279680   46713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:04.279755   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.329363   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.389025   46713 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
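Once CRI-O has been restarted, the runtime is probed both through the CRI socket and via the crio binary itself, which yields the version block above. Roughly the same verification by hand, assuming the endpoint written to /etc/crictl.yaml earlier:

    sudo systemctl is-active crio                                            # did the restart succeed?
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version    # RuntimeName / RuntimeVersion as logged
    crio --version                                                           # CLI-side version string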
	I0914 22:47:02.939505   45407 main.go:141] libmachine: (no-preload-344363) Calling .Start
	I0914 22:47:02.939701   45407 main.go:141] libmachine: (no-preload-344363) Ensuring networks are active...
	I0914 22:47:02.940415   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network default is active
	I0914 22:47:02.940832   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network mk-no-preload-344363 is active
	I0914 22:47:02.941287   45407 main.go:141] libmachine: (no-preload-344363) Getting domain xml...
	I0914 22:47:02.942103   45407 main.go:141] libmachine: (no-preload-344363) Creating domain...
	I0914 22:47:04.410207   45407 main.go:141] libmachine: (no-preload-344363) Waiting to get IP...
	I0914 22:47:04.411192   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.411669   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.411744   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.411647   47373 retry.go:31] will retry after 198.435142ms: waiting for machine to come up
	I0914 22:47:04.612435   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.612957   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.613025   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.612934   47373 retry.go:31] will retry after 350.950211ms: waiting for machine to come up
	I0914 22:47:04.965570   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.966332   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.966458   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.966377   47373 retry.go:31] will retry after 398.454996ms: waiting for machine to come up
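The retry loop above is libmachine polling libvirt until the freshly started domain picks up a DHCP lease on its private network. A rough manual equivalent, using the domain and network names from the log:

    sudo virsh net-dhcp-leases mk-no-preload-344363   # lists MAC/IP pairs once a lease exists
    sudo virsh domifaddr no-preload-344363            # per-domain view of the same lease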
	I0914 22:47:04.390295   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:04.393815   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394249   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:04.394282   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394543   46713 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:04.398850   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:04.411297   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:47:04.411363   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:04.443950   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:04.444023   46713 ssh_runner.go:195] Run: which lz4
	I0914 22:47:04.448422   46713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 22:47:04.453479   46713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:47:04.453505   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
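The stat failure above only means the preload has not been copied into the guest yet, so the ~440 MB archive is transferred from the host-side cache. The two sides of that check, using the paths from the log (the first command runs inside the guest, the second on the host):

    stat -c "%s %y" /preloaded.tar.lz4
    ls -lh /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4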
	I0914 22:47:03.686086   45954 pod_ready.go:92] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.686112   45954 pod_ready.go:81] duration metric: took 6.523915685s waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.686125   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692434   45954 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.692454   45954 pod_ready.go:81] duration metric: took 6.320818ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692466   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698065   45954 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.698088   45954 pod_ready.go:81] duration metric: took 5.613243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698100   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703688   45954 pod_ready.go:92] pod "kube-proxy-j2qmv" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.703706   45954 pod_ready.go:81] duration metric: took 5.599421ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703718   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708487   45954 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.708505   45954 pod_ready.go:81] duration metric: took 4.779322ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708516   45954 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:05.993620   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
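Each pod_ready wait above is a poll of the pod's Ready condition; expressed with kubectl (assuming, as is usual for minikube, that the profile name default-k8s-diff-port-799144 is also the kubeconfig context) the equivalent looks like:

    kubectl --context default-k8s-diff-port-799144 -n kube-system \
      wait --for=condition=Ready pod/etcd-default-k8s-diff-port-799144 --timeout=6m
    kubectl --context default-k8s-diff-port-799144 -n kube-system \
      get pod metrics-server-57f55c9bc5-hfgp8        # the pod still reporting Ready=False above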
	I0914 22:47:07.475579   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.475617   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:07.475631   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:07.531335   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.531366   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:08.032057   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.039350   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.039384   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:08.531559   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.538857   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.538891   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:09.031899   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:09.037891   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:47:09.047398   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:47:09.047426   46412 api_server.go:131] duration metric: took 5.880732639s to wait for apiserver health ...
	I0914 22:47:09.047434   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:47:09.047440   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:09.049137   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
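The 403 → 500 → 200 progression above is the apiserver coming up: anonymous requests to /healthz are rejected until the RBAC bootstrap roles are installed, then individual post-start hooks flip from [-] to [+], and finally the endpoint returns a plain "ok". The probe is just an HTTPS GET, e.g.:

    curl -k https://192.168.61.205:8443/healthz              # 403/500 while bootstrapping, "ok" once healthy
    curl -k "https://192.168.61.205:8443/healthz?verbose"    # per-check breakdown like the listing above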
	I0914 22:47:05.366070   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.366812   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.366844   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.366740   47373 retry.go:31] will retry after 471.857141ms: waiting for machine to come up
	I0914 22:47:05.840519   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.841198   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.841229   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.841150   47373 retry.go:31] will retry after 632.189193ms: waiting for machine to come up
	I0914 22:47:06.475175   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:06.475769   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:06.475800   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:06.475704   47373 retry.go:31] will retry after 866.407813ms: waiting for machine to come up
	I0914 22:47:07.344343   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:07.344865   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:07.344897   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:07.344815   47373 retry.go:31] will retry after 1.101301607s: waiting for machine to come up
	I0914 22:47:08.448452   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:08.449070   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:08.449111   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:08.449014   47373 retry.go:31] will retry after 995.314765ms: waiting for machine to come up
	I0914 22:47:09.446294   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:09.446708   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:09.446740   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:09.446653   47373 retry.go:31] will retry after 1.180552008s: waiting for machine to come up
	I0914 22:47:05.984485   46713 crio.go:444] Took 1.536109 seconds to copy over tarball
	I0914 22:47:05.984562   46713 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:47:09.247825   46713 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.263230608s)
	I0914 22:47:09.247858   46713 crio.go:451] Took 3.263345 seconds to extract the tarball
	I0914 22:47:09.247871   46713 ssh_runner.go:146] rm: /preloaded.tar.lz4
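Extraction is a plain lz4-compressed tar unpacked under /var, which for the cri-o preload populates the containers/storage image store directly; the runtime is then asked again whether the expected images are present:

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # as run above; ~3s on this host
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json                 # re-check what the runtime now has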
	I0914 22:47:09.289821   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:09.340429   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:09.340463   46713 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:09.340544   46713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.340568   46713 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0914 22:47:09.340535   46713 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.340531   46713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.340789   46713 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.340811   46713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.340886   46713 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.340793   46713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.342655   46713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.342658   46713 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.342636   46713 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.342635   46713 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.342793   46713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.561063   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0914 22:47:09.564079   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.564246   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.564957   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.566014   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.571757   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.578469   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.687502   46713 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0914 22:47:09.687548   46713 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0914 22:47:09.687591   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.727036   46713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0914 22:47:09.727085   46713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.727140   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0914 22:47:09.737952   46713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0914 22:47:09.737986   46713 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.737990   46713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0914 22:47:09.738002   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738013   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738023   46713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.738063   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.744728   46713 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0914 22:47:09.744768   46713 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.744813   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753014   46713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0914 22:47:09.753055   46713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.753080   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753104   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.753056   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0914 22:47:09.753149   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.753193   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.753213   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.758372   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.758544   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.875271   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0914 22:47:09.875299   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0914 22:47:09.875357   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0914 22:47:09.875382   46713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.875404   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0914 22:47:09.876393   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0914 22:47:09.878339   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0914 22:47:09.878491   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0914 22:47:09.881457   46713 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0914 22:47:09.881475   46713 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.881521   46713 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
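Since the preload did not cover the v1.16.0 image set, each missing image is copied from the host cache and imported into the shared containers/storage with podman, then verified by image ID. One such round-trip, using the pause image paths from the log:

    sudo podman load -i /var/lib/minikube/images/pause_3.1                    # import the cached archive
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1    # confirm the expected ID is present
    sudo crictl images | grep pause                                           # CRI-O sees the same store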
	I0914 22:47:08.496805   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.993044   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:09.050966   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:09.061912   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:09.096783   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:09.111938   46412 system_pods.go:59] 8 kube-system pods found
	I0914 22:47:09.111976   46412 system_pods.go:61] "coredns-5dd5756b68-zrd8r" [5b5f18a0-d6ee-42f2-b31a-4f8555b50388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:09.111988   46412 system_pods.go:61] "etcd-embed-certs-588699" [b32d61b5-8c3f-4980-9f0f-c08630be9c36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:47:09.112001   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [58ac976e-7a8c-4aee-9ee5-b92bd7e897b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:47:09.112015   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [3f9587f5-fe32-446a-a4c9-cb679b177937] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:47:09.112036   46412 system_pods.go:61] "kube-proxy-l8pq9" [4aecae33-dcd9-4ec6-a537-ecbb076c44d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:47:09.112052   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [f23ab185-f4c2-4e39-936d-51d51538b0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:47:09.112066   46412 system_pods.go:61] "metrics-server-57f55c9bc5-zvk82" [3c48277c-4604-4a83-82ea-2776cf0d0537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:47:09.112077   46412 system_pods.go:61] "storage-provisioner" [f0acbbe1-c326-4863-ae2e-d2d3e5be07c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:47:09.112090   46412 system_pods.go:74] duration metric: took 15.280254ms to wait for pod list to return data ...
	I0914 22:47:09.112103   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:09.119686   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:09.119725   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:09.119747   46412 node_conditions.go:105] duration metric: took 7.637688ms to run NodePressure ...
	I0914 22:47:09.119768   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:09.407351   46412 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414338   46412 kubeadm.go:787] kubelet initialised
	I0914 22:47:09.414361   46412 kubeadm.go:788] duration metric: took 6.974234ms waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414369   46412 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:47:09.424482   46412 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:12.171133   46412 pod_ready.go:102] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"False"
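With the bridge CNI config in place, the addon phase is re-applied against the existing cluster and the restarted control-plane pods are then awaited. Roughly, assuming the embed-certs-588699 profile doubles as the kubeconfig context:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" \
      kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml    # as run above, inside the guest
    kubectl --context embed-certs-588699 -n kube-system get pods              # coredns, etcd, apiserver, ... as listed above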
	I0914 22:47:10.628919   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:10.629418   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:10.629449   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:10.629366   47373 retry.go:31] will retry after 1.486310454s: waiting for machine to come up
	I0914 22:47:12.117762   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:12.118350   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:12.118381   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:12.118295   47373 retry.go:31] will retry after 2.678402115s: waiting for machine to come up
	I0914 22:47:14.798599   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:14.799127   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:14.799160   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:14.799060   47373 retry.go:31] will retry after 2.724185493s: waiting for machine to come up
	I0914 22:47:10.647242   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:12.244764   46713 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.363213143s)
	I0914 22:47:12.244798   46713 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0914 22:47:12.244823   46713 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.013457524s)
	I0914 22:47:12.244888   46713 cache_images.go:92] LoadImages completed in 2.904411161s
	W0914 22:47:12.244978   46713 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0914 22:47:12.245070   46713 ssh_runner.go:195] Run: crio config
	I0914 22:47:12.328636   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:12.328663   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:12.328687   46713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:12.328710   46713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-930717 NodeName:old-k8s-version-930717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 22:47:12.328882   46713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-930717"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-930717
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:12.328984   46713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-930717 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
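Note that cgroupDriver: cgroupfs in the generated KubeletConfiguration matches the cgroup_manager = "cgroupfs" written into /etc/crio/crio.conf.d/02-crio.conf earlier; a mismatch between kubelet and CRI-O here is a classic cause of pods failing to start. A quick consistency check on the guest, using the paths from the log:

    grep -E 'cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    grep cgroupDriver /var/lib/kubelet/config.yaml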
	I0914 22:47:12.329062   46713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0914 22:47:12.339084   46713 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:12.339169   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:12.348354   46713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 22:47:12.369083   46713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:12.388242   46713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0914 22:47:12.407261   46713 ssh_runner.go:195] Run: grep 192.168.72.70	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:12.411055   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:12.425034   46713 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717 for IP: 192.168.72.70
	I0914 22:47:12.425070   46713 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:12.425236   46713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:12.425283   46713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:12.425372   46713 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.key
	I0914 22:47:12.425451   46713 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key.382dacf3
	I0914 22:47:12.425512   46713 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key
	I0914 22:47:12.425642   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:12.425671   46713 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:12.425685   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:12.425708   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:12.425732   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:12.425751   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:12.425789   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:12.426339   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:12.456306   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:47:12.486038   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:12.520941   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:47:12.552007   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:12.589620   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:12.619358   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:12.650395   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:12.678898   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:12.704668   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:12.730499   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:12.755286   46713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:12.773801   46713 ssh_runner.go:195] Run: openssl version
	I0914 22:47:12.781147   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:12.793953   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799864   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799922   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.806881   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:12.817936   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:12.830758   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836538   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836613   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.843368   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:12.855592   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:12.866207   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871317   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871368   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.878438   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
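The openssl -hash calls above compute the subject-hash filenames (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL expects to find in /etc/ssl/certs, which is why each certificate copy is followed by a hash-named symlink. For the minikube CA, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem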
	I0914 22:47:12.891012   46713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:12.895887   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:12.902284   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:12.909482   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:12.916524   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:12.924045   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:12.929935   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
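The final round of openssl calls checks, via -checkend 86400, that each existing certificate is still valid for at least another 24 hours before it is reused; -checkend exits non-zero if the certificate expires within the given number of seconds. For instance:

    sudo openssl x509 -noout -enddate -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver.crt && echo "apiserver cert good for another 24h"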
	I0914 22:47:12.937292   46713 kubeadm.go:404] StartCluster: {Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:12.937417   46713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:12.937470   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:12.975807   46713 cri.go:89] found id: ""
	I0914 22:47:12.975902   46713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:12.988356   46713 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:12.988379   46713 kubeadm.go:636] restartCluster start
	I0914 22:47:12.988434   46713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:13.000294   46713 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.001492   46713 kubeconfig.go:92] found "old-k8s-version-930717" server: "https://192.168.72.70:8443"
	I0914 22:47:13.008583   46713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:13.023004   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.023065   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.037604   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.037625   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.037671   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.048939   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.549653   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.549746   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.561983   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.049481   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.049588   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.064694   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.549101   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.549195   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.564858   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:15.049112   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.049206   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.063428   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
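The repeated pgrep failures above just mean that no kube-apiserver process exists yet after the runtime restart; it will only appear once the kubelet launches the static pods. Things to look at while it retries, assuming the staticPodPath from the config above:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running yet"
    ls /etc/kubernetes/manifests/                    # static pod manifests the kubelet will launch
    sudo crictl ps -a --name kube-apiserver          # has CRI-O created the container yet?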
	I0914 22:47:12.993654   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:14.995358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:13.946979   46412 pod_ready.go:92] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:13.947004   46412 pod_ready.go:81] duration metric: took 4.522495708s waiting for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:13.947013   46412 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:15.968061   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:18.465595   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:17.526472   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:17.526915   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:17.526946   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:17.526867   47373 retry.go:31] will retry after 3.587907236s: waiting for machine to come up
	I0914 22:47:15.549179   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.549273   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.561977   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.049593   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.049678   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.063654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.549178   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.549248   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.561922   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.049041   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.049131   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.062442   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.550005   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.550066   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.561254   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.049855   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.049932   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.062226   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.549845   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.549941   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.561219   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.049739   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.049829   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.061225   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.550035   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.550112   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.561546   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:20.049979   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.050080   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.061478   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
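	[editor's note] The block above is the restartCluster probe for the old-k8s-version-930717 profile: the same pgrep check is re-run roughly every half second until an apiserver process appears or the deadline expires. A minimal stand-alone sketch of that probe (the retry count and interval here are illustrative, not minikube's actual budget):

	    # hypothetical poll loop mirroring the repeated "Checking apiserver status" lines above
	    for attempt in $(seq 1 20); do
	      if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
	        echo "kube-apiserver pid: ${pid}"
	        break
	      fi
	      sleep 0.5   # the log shows roughly 500ms between attempts
	    done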
	I0914 22:47:17.489830   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:19.490802   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.490931   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.118871   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119369   45407 main.go:141] libmachine: (no-preload-344363) Found IP for machine: 192.168.39.60
	I0914 22:47:21.119391   45407 main.go:141] libmachine: (no-preload-344363) Reserving static IP address...
	I0914 22:47:21.119418   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has current primary IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119860   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.119888   45407 main.go:141] libmachine: (no-preload-344363) Reserved static IP address: 192.168.39.60
	I0914 22:47:21.119906   45407 main.go:141] libmachine: (no-preload-344363) DBG | skip adding static IP to network mk-no-preload-344363 - found existing host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"}
	I0914 22:47:21.119931   45407 main.go:141] libmachine: (no-preload-344363) DBG | Getting to WaitForSSH function...
	I0914 22:47:21.119949   45407 main.go:141] libmachine: (no-preload-344363) Waiting for SSH to be available...
	I0914 22:47:21.121965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122282   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.122312   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122392   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH client type: external
	I0914 22:47:21.122429   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa (-rw-------)
	I0914 22:47:21.122482   45407 main.go:141] libmachine: (no-preload-344363) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:21.122510   45407 main.go:141] libmachine: (no-preload-344363) DBG | About to run SSH command:
	I0914 22:47:21.122521   45407 main.go:141] libmachine: (no-preload-344363) DBG | exit 0
	I0914 22:47:21.206981   45407 main.go:141] libmachine: (no-preload-344363) DBG | SSH cmd err, output: <nil>: 
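	[editor's note] The `exit 0` probe above is how libmachine decides SSH is available; the long option string reduces to a non-interactive, key-only connection attempt. Roughly the same check, run by hand from the integration host (key path and address taken from the log):

	    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	        -o UserKnownHostsFile=/dev/null -o PasswordAuthentication=no -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa \
	        -p 22 docker@192.168.39.60 'exit 0' && echo 'SSH is up'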
	I0914 22:47:21.207366   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetConfigRaw
	I0914 22:47:21.208066   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.210323   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210607   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.210639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210795   45407 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/config.json ...
	I0914 22:47:21.211016   45407 machine.go:88] provisioning docker machine ...
	I0914 22:47:21.211036   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:21.211258   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211431   45407 buildroot.go:166] provisioning hostname "no-preload-344363"
	I0914 22:47:21.211455   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211629   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.213574   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.213887   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.213921   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.214015   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.214181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214338   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.214648   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.215041   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.215056   45407 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-344363 && echo "no-preload-344363" | sudo tee /etc/hostname
	I0914 22:47:21.347323   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-344363
	
	I0914 22:47:21.347358   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.350445   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.350846   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.350882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.351144   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.351393   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351599   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351766   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.351944   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.352264   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.352291   45407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-344363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-344363/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-344363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:21.471619   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:21.471648   45407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:21.471671   45407 buildroot.go:174] setting up certificates
	I0914 22:47:21.471683   45407 provision.go:83] configureAuth start
	I0914 22:47:21.471696   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.472019   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.474639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475113   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.475141   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475293   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.477627   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.477976   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.478009   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.478148   45407 provision.go:138] copyHostCerts
	I0914 22:47:21.478189   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:21.478198   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:21.478249   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:21.478336   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:21.478344   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:21.478362   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:21.478416   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:21.478423   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:21.478439   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:21.478482   45407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.no-preload-344363 san=[192.168.39.60 192.168.39.60 localhost 127.0.0.1 minikube no-preload-344363]
	I0914 22:47:21.546956   45407 provision.go:172] copyRemoteCerts
	I0914 22:47:21.547006   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:21.547029   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.549773   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550217   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.550257   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550468   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.550683   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.550850   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.551050   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:21.635939   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:21.656944   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:47:21.679064   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:21.701127   45407 provision.go:86] duration metric: configureAuth took 229.434247ms
	I0914 22:47:21.701147   45407 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:21.701319   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:47:21.701381   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.704100   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704475   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.704512   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704672   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.704865   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705046   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705218   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.705382   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.705828   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.705849   45407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:22.037291   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:22.037337   45407 machine.go:91] provisioned docker machine in 826.295956ms
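	[editor's note] The `%!s(MISSING)` tokens in the command above are a format-verb artifact of the logger, not part of what ran on the guest; in substance the remote command is the following reconstruction:

	    sudo mkdir -p /etc/sysconfig
	    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	      | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio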
	I0914 22:47:22.037350   45407 start.go:300] post-start starting for "no-preload-344363" (driver="kvm2")
	I0914 22:47:22.037363   45407 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:22.037396   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.037704   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:22.037729   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.040372   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040729   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.040757   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040896   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.041082   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.041266   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.041373   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.129612   45407 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:22.133522   45407 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:22.133550   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:22.133625   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:22.133715   45407 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:22.133844   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:22.142411   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:22.165470   45407 start.go:303] post-start completed in 128.106418ms
	I0914 22:47:22.165496   45407 fix.go:56] fixHost completed within 19.252903923s
	I0914 22:47:22.165524   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.168403   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168696   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.168731   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168894   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.169095   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169248   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169384   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.169571   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:22.169891   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:22.169904   45407 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:22.284038   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731642.258576336
	
	I0914 22:47:22.284062   45407 fix.go:206] guest clock: 1694731642.258576336
	I0914 22:47:22.284071   45407 fix.go:219] Guest: 2023-09-14 22:47:22.258576336 +0000 UTC Remote: 2023-09-14 22:47:22.16550191 +0000 UTC m=+357.203571663 (delta=93.074426ms)
	I0914 22:47:22.284107   45407 fix.go:190] guest clock delta is within tolerance: 93.074426ms
	I0914 22:47:22.284117   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 19.371563772s
	I0914 22:47:22.284146   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.284388   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:22.286809   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287091   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.287133   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287288   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287782   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287978   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.288050   45407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:22.288085   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.288176   45407 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:22.288197   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.290608   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.290936   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.290965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291067   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291157   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291345   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291516   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.291529   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.291554   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291649   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.291706   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291837   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291975   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.292158   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.417570   45407 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:22.423145   45407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:22.563752   45407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:22.569625   45407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:22.569718   45407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:22.585504   45407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:22.585527   45407 start.go:469] detecting cgroup driver to use...
	I0914 22:47:22.585610   45407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:22.599600   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:22.612039   45407 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:22.612080   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:22.624817   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:22.637141   45407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:22.744181   45407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:22.864420   45407 docker.go:212] disabling docker service ...
	I0914 22:47:22.864490   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:22.877360   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:22.888786   45407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:23.000914   45407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:23.137575   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:23.150682   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:23.167898   45407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:47:23.167966   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.176916   45407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:23.176991   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.185751   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.195260   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.204852   45407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:23.214303   45407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:23.222654   45407 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:23.222717   45407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:23.235654   45407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:23.244081   45407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:23.357943   45407 ssh_runner.go:195] Run: sudo systemctl restart crio
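	[editor's note] Taken together, the commands from 22:47:23.167 onward are the CRI-O preparation pass for this profile: point the runtime at the pause:3.9 image, force the cgroupfs driver, satisfy the bridge-netfilter and IPv4-forwarding prerequisites, then restart the runtime. A condensed sketch of the same steps (paths and values copied from the log; ordering simplified):

	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter          # the earlier sysctl check failed because the module was not loaded
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio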
	I0914 22:47:23.521315   45407 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:23.521410   45407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:23.526834   45407 start.go:537] Will wait 60s for crictl version
	I0914 22:47:23.526889   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:23.530250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:23.562270   45407 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:23.562358   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.606666   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.658460   45407 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:47:20.467600   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:20.964310   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.964331   46412 pod_ready.go:81] duration metric: took 7.017312906s waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.964349   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968539   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.968555   46412 pod_ready.go:81] duration metric: took 4.200242ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968563   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973180   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.973194   46412 pod_ready.go:81] duration metric: took 4.625123ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973206   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977403   46412 pod_ready.go:92] pod "kube-proxy-l8pq9" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.977418   46412 pod_ready.go:81] duration metric: took 4.206831ms waiting for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977425   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375236   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:22.375259   46412 pod_ready.go:81] duration metric: took 1.397826525s waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375271   46412 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
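	[editor's note] At this point the embed-certs-588699 control plane is healthy and the run blocks on metrics-server-57f55c9bc5-zvk82, the pod that stays NotReady in the failing StartStop cases. An equivalent manual check would look roughly like this (context name from the log; the label selector is an assumption, not taken from this output):

	    kubectl --context embed-certs-588699 -n kube-system get pods
	    kubectl --context embed-certs-588699 -n kube-system wait \
	        --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m0s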
	I0914 22:47:23.659885   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:23.662745   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663195   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:23.663228   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663452   45407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:23.667637   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:23.678881   45407 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:47:23.678929   45407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:23.708267   45407 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:47:23.708309   45407 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:23.708390   45407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.708421   45407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.708424   45407 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0914 22:47:23.708437   45407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.708425   45407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.708537   45407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.708403   45407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.708393   45407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.709903   45407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.709887   45407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.709899   45407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.710189   45407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.710260   45407 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0914 22:47:23.710346   45407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.917134   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.929080   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.929396   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0914 22:47:23.935684   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.936236   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.937239   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.937622   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.006429   45407 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0914 22:47:24.006479   45407 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.006524   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.102547   45407 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0914 22:47:24.102597   45407 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.102641   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201012   45407 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0914 22:47:24.201050   45407 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.201100   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201106   45407 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0914 22:47:24.201138   45407 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.201156   45407 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0914 22:47:24.201203   45407 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.201227   45407 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0914 22:47:24.201282   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.201294   45407 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.201329   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201236   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201180   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.206295   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.263389   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0914 22:47:24.263451   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.263501   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.263513   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:24.263534   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0914 22:47:24.263573   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.263665   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.273844   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0914 22:47:24.273932   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:24.338823   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0914 22:47:24.338944   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:24.344560   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0914 22:47:24.344580   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0914 22:47:24.344594   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344635   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344659   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:24.344678   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0914 22:47:24.344723   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0914 22:47:24.344745   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0914 22:47:24.344816   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:24.346975   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0914 22:47:24.953835   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
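	[editor's note] With no preload tarball for v1.28.1 on crio, the run falls back to the per-image cache: inspect what the runtime already holds, drop mismatching tags with crictl, then load each cached archive with podman. One of those load steps, reproduced by hand with paths from the log:

	    sudo crictl images --output json | grep kube-proxy   # is the image already present in the runtime?
	    sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1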
	I0914 22:47:20.549479   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.549585   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.563121   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.049732   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.049807   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.061447   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.549012   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.549073   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.561653   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.049517   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.049582   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.062280   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.549943   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.550017   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.562654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:23.024019   46713 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:23.024043   46713 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:23.024054   46713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:23.024101   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:23.060059   46713 cri.go:89] found id: ""
	I0914 22:47:23.060116   46713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:23.078480   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:23.087665   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:23.087714   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096513   46713 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096535   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:23.205072   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.081881   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.285041   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.364758   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.468127   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:24.468201   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:24.483354   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.007133   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.507231   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
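	[editor's note] Because none of the expected kubeconfig files exist, the old-k8s-version restart replays the kubeadm init phases against the existing cluster data rather than wiping it. The sequence above, condensed (a summary of the logged commands, not additional phases):

	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    # $phase is intentionally unquoted so 'certs all' expands to two arguments
	    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done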
	I0914 22:47:23.992945   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.492600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:24.475872   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.978889   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.317110   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.97244294s)
	I0914 22:47:26.317145   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0914 22:47:26.317167   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317174   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (1.972489589s)
	I0914 22:47:26.317202   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0914 22:47:26.317215   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317248   45407 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.363386448s)
	I0914 22:47:26.317281   45407 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 22:47:26.317319   45407 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.317366   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:26.317213   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (1.972376756s)
	I0914 22:47:26.317426   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0914 22:47:28.397989   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (2.080744487s)
	I0914 22:47:28.398021   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0914 22:47:28.398031   45407 ssh_runner.go:235] Completed: which crictl: (2.080647539s)
	I0914 22:47:28.398048   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398093   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398095   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.006554   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:26.032232   46713 api_server.go:72] duration metric: took 1.564104415s to wait for apiserver process to appear ...
	I0914 22:47:26.032255   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:26.032270   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:28.992292   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.490442   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.033000   46713 api_server.go:269] stopped: https://192.168.72.70:8443/healthz: Get "https://192.168.72.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 22:47:31.033044   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:31.568908   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:31.568937   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:32.069915   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.080424   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.080456   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:32.570110   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.580879   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.580918   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:33.069247   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:33.077664   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:47:33.086933   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:47:33.086960   46713 api_server.go:131] duration metric: took 7.054699415s to wait for apiserver health ...
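
The healthz wait above cycles through 403 for the anonymous probe, 500 while post-start hooks are still failing, and finally 200 "ok". A minimal sketch of that polling loop, assuming a skip-verify TLS client and the endpoint taken from the log (not minikube's actual api_server.go code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.72.70:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403 and 500 both mean "not ready yet"; only a 200 with body "ok" ends the wait.
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
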
	I0914 22:47:33.086973   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:33.086981   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:33.088794   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:29.476304   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.975459   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:30.974281   45407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.57612291s)
	I0914 22:47:30.974347   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 22:47:30.974381   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.576263058s)
	I0914 22:47:30.974403   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0914 22:47:30.974427   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:30.974455   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:30.974470   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:33.737309   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.762815322s)
	I0914 22:47:33.737355   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0914 22:47:33.737379   45407 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.737322   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.762844826s)
	I0914 22:47:33.737464   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 22:47:33.737436   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.090357   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:33.103371   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:33.123072   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:33.133238   46713 system_pods.go:59] 7 kube-system pods found
	I0914 22:47:33.133268   46713 system_pods.go:61] "coredns-5644d7b6d9-8sbjk" [638464d2-96db-460d-bf82-0ee79df816da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:33.133278   46713 system_pods.go:61] "etcd-old-k8s-version-930717" [4b38f48a-fc4a-43d5-a2b4-414aff712c1b] Running
	I0914 22:47:33.133286   46713 system_pods.go:61] "kube-apiserver-old-k8s-version-930717" [523a3adc-8c68-4980-8a53-133476ce2488] Running
	I0914 22:47:33.133294   46713 system_pods.go:61] "kube-controller-manager-old-k8s-version-930717" [36fd7e01-4a5d-446f-8370-f7a7e886571c] Running
	I0914 22:47:33.133306   46713 system_pods.go:61] "kube-proxy-l4qz4" [c61d0471-0a9e-4662-b723-39944c8b3c31] Running
	I0914 22:47:33.133314   46713 system_pods.go:61] "kube-scheduler-old-k8s-version-930717" [f6d45807-c7f2-4545-b732-45dbd945c660] Running
	I0914 22:47:33.133323   46713 system_pods.go:61] "storage-provisioner" [2956bea1-80f8-4f61-a635-4332d4e3042e] Running
	I0914 22:47:33.133331   46713 system_pods.go:74] duration metric: took 10.233824ms to wait for pod list to return data ...
	I0914 22:47:33.133343   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:33.137733   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:33.137765   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:33.137776   46713 node_conditions.go:105] duration metric: took 4.42667ms to run NodePressure ...
	I0914 22:47:33.137795   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:33.590921   46713 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:33.597720   46713 retry.go:31] will retry after 159.399424ms: kubelet not initialised
	I0914 22:47:33.767747   46713 retry.go:31] will retry after 191.717885ms: kubelet not initialised
	I0914 22:47:33.967120   46713 retry.go:31] will retry after 382.121852ms: kubelet not initialised
	I0914 22:47:34.354106   46713 retry.go:31] will retry after 1.055800568s: kubelet not initialised
	I0914 22:47:35.413704   46713 retry.go:31] will retry after 1.341728619s: kubelet not initialised
	I0914 22:47:33.993188   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.491280   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:34.475254   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.977175   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.760804   46713 retry.go:31] will retry after 2.668611083s: kubelet not initialised
	I0914 22:47:39.434688   46713 retry.go:31] will retry after 2.1019007s: kubelet not initialised
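
The retry.go lines above wait for the restarted kubelet with a roughly doubling delay between attempts. A simplified stand-in for that pattern (the starting delay, cap-free doubling and error text are illustrative, not minikube's actual retry implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryUntil calls fn until it succeeds or the total timeout elapses,
    // roughly doubling the wait between attempts.
    func retryUntil(timeout time.Duration, fn func() error) error {
        delay := 150 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().Add(delay).After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
    }

    func main() {
        _ = retryUntil(30*time.Second, func() error { return errors.New("kubelet not initialised") })
    }
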
	I0914 22:47:38.994051   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.490913   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:38.998980   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.474686   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:40.530763   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.793268381s)
	I0914 22:47:40.530793   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0914 22:47:40.530820   45407 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:40.530881   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:41.888277   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.357355595s)
	I0914 22:47:41.888305   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0914 22:47:41.888338   45407 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:41.888405   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:42.537191   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 22:47:42.537244   45407 cache_images.go:123] Successfully loaded all cached images
	I0914 22:47:42.537251   45407 cache_images.go:92] LoadImages completed in 18.828927203s
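
LoadImages above copies each cached tarball under /var/lib/minikube/images (skipping ones that already exist) and feeds it to podman load. A rough local equivalent of the load step, with the image list hard-coded from the log and plain os/exec in place of ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        images := []string{"kube-apiserver_v1.28.1", "kube-scheduler_v1.28.1", "kube-controller-manager_v1.28.1", "etcd_3.5.9-0", "coredns_v1.10.1", "storage-provisioner_v5"}
        for _, img := range images {
            tar := filepath.Join("/var/lib/minikube/images", img)
            // Equivalent of the "sudo podman load -i <tarball>" calls in the log.
            out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
            if err != nil {
                fmt.Fprintf(os.Stderr, "loading %s failed: %v\n%s", img, err, out)
                continue
            }
            fmt.Printf("loaded %s\n", img)
        }
    }
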
	I0914 22:47:42.537344   45407 ssh_runner.go:195] Run: crio config
	I0914 22:47:42.594035   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:47:42.594056   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:42.594075   45407 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:42.594098   45407 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-344363 NodeName:no-preload-344363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:47:42.594272   45407 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-344363"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:42.594383   45407 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-344363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
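
The kubeadm YAML and the kubelet drop-in above are rendered from the option struct that kubeadm.go logs. A trimmed sketch of that render step using text/template, with values taken from the log; the template text is an illustration, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down version of the ExecStart line shown above.
    const unitTmpl = `[Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("unit").Parse(unitTmpl))
        _ = t.Execute(os.Stdout, map[string]string{
            "BinDir":   "/var/lib/minikube/binaries/v1.28.1",
            "NodeName": "no-preload-344363",
            "NodeIP":   "192.168.39.60",
        })
    }
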
	I0914 22:47:42.594449   45407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:47:42.604172   45407 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:42.604243   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:42.612570   45407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0914 22:47:42.628203   45407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:42.643625   45407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0914 22:47:42.658843   45407 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:42.661922   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:42.672252   45407 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363 for IP: 192.168.39.60
	I0914 22:47:42.672279   45407 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:42.672420   45407 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:42.672462   45407 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:42.672536   45407 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.key
	I0914 22:47:42.672630   45407 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key.a014e791
	I0914 22:47:42.672693   45407 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key
	I0914 22:47:42.672828   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:42.672867   45407 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:42.672879   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:42.672915   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:42.672948   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:42.672982   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:42.673044   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:42.673593   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:42.695080   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:47:42.716844   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:42.746475   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0914 22:47:42.769289   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:42.790650   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:42.811665   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:42.833241   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:42.853851   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:42.875270   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:42.896913   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:42.917370   45407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:42.934549   45407 ssh_runner.go:195] Run: openssl version
	I0914 22:47:42.939762   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:42.949829   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954155   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954204   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.959317   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:42.968463   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:42.979023   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983436   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983502   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.988655   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:42.998288   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:43.007767   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011865   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011940   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.016837   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:43.026372   45407 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:43.030622   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:43.036026   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:43.041394   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:43.046608   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:43.051675   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:43.056621   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
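
Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours. The same check with the Go standard library, using one of the paths from the log and minimal error handling:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM data found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of -checkend 86400: fail if the cert expires within 24h.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 24h")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }
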
	I0914 22:47:43.061552   45407 kubeadm.go:404] StartCluster: {Name:no-preload-344363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:43.061645   45407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:43.061700   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:43.090894   45407 cri.go:89] found id: ""
	I0914 22:47:43.090957   45407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:43.100715   45407 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:43.100732   45407 kubeadm.go:636] restartCluster start
	I0914 22:47:43.100782   45407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:43.109233   45407 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.110217   45407 kubeconfig.go:92] found "no-preload-344363" server: "https://192.168.39.60:8443"
	I0914 22:47:43.112442   45407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:43.120580   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.120619   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.131224   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.131238   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.131292   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.140990   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.641661   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.641753   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.653379   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.142002   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.142077   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.154194   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.641806   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.641931   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.653795   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:41.541334   46713 retry.go:31] will retry after 2.553142131s: kubelet not initialised
	I0914 22:47:44.100647   46713 retry.go:31] will retry after 6.538244211s: kubelet not initialised
	I0914 22:47:43.995757   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.490438   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:43.974300   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.474137   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:45.141728   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.141816   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.153503   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:45.641693   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.641775   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.653204   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.141748   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.141838   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.153035   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.641294   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.641386   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.653144   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.141813   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.141915   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.152408   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.641793   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.641872   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.653228   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.141212   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.141304   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.152568   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.641805   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.641881   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.652184   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.141839   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.141909   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.152921   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.642082   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.642160   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.656837   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.991209   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:51.492672   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:48.973567   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.974964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:52.975525   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.141324   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.141399   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.153003   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:50.642032   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.642113   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.653830   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.141403   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.141486   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.152324   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.641932   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.642027   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.653279   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.141928   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.141998   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.152653   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.641151   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.641239   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.652312   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:53.121389   45407 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:53.121422   45407 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:53.121436   45407 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:53.121511   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:53.150615   45407 cri.go:89] found id: ""
	I0914 22:47:53.150681   45407 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:53.164511   45407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:53.173713   45407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:53.173778   45407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183776   45407 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183797   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:53.310974   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.230246   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.409237   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.474183   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.572433   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:54.572581   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:54.584938   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:50.644922   46713 retry.go:31] will retry after 11.248631638s: kubelet not initialised
	I0914 22:47:53.990630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.990661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.475037   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:57.475941   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.098638   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:55.599218   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.099188   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.598826   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.621701   45407 api_server.go:72] duration metric: took 2.049267478s to wait for apiserver process to appear ...
	I0914 22:47:56.621729   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:56.621749   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622263   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:56.622301   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622682   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:57.123404   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.433050   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:48:00.433082   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:48:00.433096   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.467030   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.467073   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:00.623319   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.633882   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.633912   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.123559   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.128661   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.128691   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.623201   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.629775   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.629804   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:02.123439   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:02.131052   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:48:02.141185   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:48:02.141213   45407 api_server.go:131] duration metric: took 5.519473898s to wait for apiserver health ...
	I0914 22:48:02.141222   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:48:02.141228   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:48:02.143254   45407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:57.992038   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:59.992600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:02.144756   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:48:02.158230   45407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:48:02.182382   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:48:02.204733   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:48:02.204786   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:48:02.204801   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:48:02.204817   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:48:02.204834   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:48:02.204847   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:48:02.204859   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:48:02.204876   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:48:02.204887   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:48:02.204900   45407 system_pods.go:74] duration metric: took 22.491699ms to wait for pod list to return data ...
	I0914 22:48:02.204913   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:48:02.208661   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:48:02.208692   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:48:02.208706   45407 node_conditions.go:105] duration metric: took 3.7844ms to run NodePressure ...
	I0914 22:48:02.208731   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:48:02.454257   45407 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458848   45407 kubeadm.go:787] kubelet initialised
	I0914 22:48:02.458868   45407 kubeadm.go:788] duration metric: took 4.585034ms waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458874   45407 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:02.464634   45407 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.471350   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471371   45407 pod_ready.go:81] duration metric: took 6.714087ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.471379   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471387   45407 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.476977   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.476998   45407 pod_ready.go:81] duration metric: took 5.604627ms waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.477009   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.477019   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.483218   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483236   45407 pod_ready.go:81] duration metric: took 6.211697ms waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.483244   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483256   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.589184   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589217   45407 pod_ready.go:81] duration metric: took 105.950074ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.589227   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589236   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.987051   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987081   45407 pod_ready.go:81] duration metric: took 397.836385ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.987094   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987103   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.392835   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392865   45407 pod_ready.go:81] duration metric: took 405.754351ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.392876   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392886   45407 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.786615   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786641   45407 pod_ready.go:81] duration metric: took 393.746366ms waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.786652   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786660   45407 pod_ready.go:38] duration metric: took 1.327778716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:03.786676   45407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:48:03.798081   45407 ops.go:34] apiserver oom_adj: -16
	I0914 22:48:03.798101   45407 kubeadm.go:640] restartCluster took 20.697363165s
	I0914 22:48:03.798107   45407 kubeadm.go:406] StartCluster complete in 20.736562339s
	I0914 22:48:03.798121   45407 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.798193   45407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:48:03.799954   45407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.800200   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:48:03.800299   45407 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:48:03.800368   45407 addons.go:69] Setting storage-provisioner=true in profile "no-preload-344363"
	I0914 22:48:03.800449   45407 addons.go:231] Setting addon storage-provisioner=true in "no-preload-344363"
	W0914 22:48:03.800462   45407 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:48:03.800511   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800394   45407 addons.go:69] Setting metrics-server=true in profile "no-preload-344363"
	I0914 22:48:03.800543   45407 addons.go:231] Setting addon metrics-server=true in "no-preload-344363"
	W0914 22:48:03.800558   45407 addons.go:240] addon metrics-server should already be in state true
	I0914 22:48:03.800590   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800388   45407 addons.go:69] Setting default-storageclass=true in profile "no-preload-344363"
	I0914 22:48:03.800633   45407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-344363"
	I0914 22:48:03.800411   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:48:03.800906   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800909   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800944   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.801011   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.801054   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.800968   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.804911   45407 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-344363" context rescaled to 1 replicas
	I0914 22:48:03.804946   45407 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:48:03.807503   45407 out.go:177] * Verifying Kubernetes components...
	I0914 22:47:59.973913   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:01.974625   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:03.808768   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:48:03.816774   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0914 22:48:03.816773   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0914 22:48:03.817265   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817518   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817791   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.817821   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818011   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.818032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818223   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818407   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818431   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.818976   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.819027   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.829592   45407 addons.go:231] Setting addon default-storageclass=true in "no-preload-344363"
	W0914 22:48:03.829614   45407 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:48:03.829641   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.830013   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.830047   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.835514   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0914 22:48:03.835935   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.836447   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.836473   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.836841   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.837011   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.838909   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.843677   45407 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:48:03.845231   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:48:03.845246   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:48:03.845261   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.844291   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0914 22:48:03.845685   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.846224   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.846242   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.846572   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.847073   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.847103   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.847332   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0914 22:48:03.848400   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.848666   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849160   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.849182   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.849263   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.849283   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849314   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.849461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.849570   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.849635   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.849682   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.850555   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.850585   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.863035   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0914 22:48:03.863559   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864010   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.864204   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0914 22:48:03.864478   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.864526   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864752   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.864936   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864955   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.865261   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.865489   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.866474   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.868300   45407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:48:03.867504   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.869841   45407 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:03.869855   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:48:03.869874   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.870067   45407 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:03.870078   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:48:03.870091   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.873462   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.873859   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.873882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874026   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874114   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.874287   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.874397   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.874903   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874949   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.874980   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.875135   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.875301   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.875486   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.956934   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:48:03.956956   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:48:03.973872   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:48:03.973896   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:48:04.002028   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.002051   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:48:04.018279   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:04.037990   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:04.047125   45407 node_ready.go:35] waiting up to 6m0s for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:04.047292   45407 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:48:04.086299   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.991926   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.991952   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992225   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992292   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992324   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992342   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992364   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992614   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992634   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992649   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992657   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992665   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992914   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992933   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:01.898769   46713 retry.go:31] will retry after 9.475485234s: kubelet not initialised
	I0914 22:48:05.528027   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.490009157s)
	I0914 22:48:05.528078   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528087   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528435   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528457   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528470   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528436   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.528481   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528802   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528824   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528829   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.600274   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.51392997s)
	I0914 22:48:05.600338   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600351   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.600645   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.600670   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.600682   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600695   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.602502   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.602513   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.602524   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.602546   45407 addons.go:467] Verifying addon metrics-server=true in "no-preload-344363"
	I0914 22:48:05.604330   45407 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 22:48:02.491577   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.995014   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.474529   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:06.474964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:05.605648   45407 addons.go:502] enable addons completed in 1.805353931s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0914 22:48:06.198114   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:08.199023   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:07.490770   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:09.991693   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:08.974469   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:11.474711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:10.698198   45407 node_ready.go:49] node "no-preload-344363" has status "Ready":"True"
	I0914 22:48:10.698218   45407 node_ready.go:38] duration metric: took 6.651066752s waiting for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:10.698227   45407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:10.704694   45407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710103   45407 pod_ready.go:92] pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:10.710119   45407 pod_ready.go:81] duration metric: took 5.400404ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710128   45407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.747445   45407 pod_ready.go:102] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.229927   45407 pod_ready.go:92] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:13.229953   45407 pod_ready.go:81] duration metric: took 2.519818297s waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:13.229966   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747126   45407 pod_ready.go:92] pod "kube-apiserver-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.747147   45407 pod_ready.go:81] duration metric: took 1.51717338s waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747157   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752397   45407 pod_ready.go:92] pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.752413   45407 pod_ready.go:81] duration metric: took 5.250049ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752420   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.380752   46713 kubeadm.go:787] kubelet initialised
	I0914 22:48:11.380783   46713 kubeadm.go:788] duration metric: took 37.789831498s waiting for restarted kubelet to initialise ...
	I0914 22:48:11.380793   46713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:11.386189   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392948   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.392970   46713 pod_ready.go:81] duration metric: took 6.75113ms waiting for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392981   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398606   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.398627   46713 pod_ready.go:81] duration metric: took 5.638835ms waiting for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398639   46713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404145   46713 pod_ready.go:92] pod "etcd-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.404174   46713 pod_ready.go:81] duration metric: took 5.527173ms waiting for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404187   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409428   46713 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.409448   46713 pod_ready.go:81] duration metric: took 5.252278ms waiting for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409461   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779225   46713 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.779252   46713 pod_ready.go:81] duration metric: took 369.782336ms waiting for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779267   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179256   46713 pod_ready.go:92] pod "kube-proxy-l4qz4" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.179277   46713 pod_ready.go:81] duration metric: took 400.003039ms waiting for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179286   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578889   46713 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.578921   46713 pod_ready.go:81] duration metric: took 399.627203ms waiting for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578935   46713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:12.491274   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:14.991146   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.991799   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.974725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.473917   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.474722   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:15.099588   45407 pod_ready.go:92] pod "kube-proxy-zzkbp" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.099612   45407 pod_ready.go:81] duration metric: took 347.18498ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.099623   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498642   45407 pod_ready.go:92] pod "kube-scheduler-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.498664   45407 pod_ready.go:81] duration metric: took 399.034277ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498678   45407 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:17.806138   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.887157   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:19.390361   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.991911   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.993133   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.474578   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.305450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:22.305521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:24.306131   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:21.885143   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.886722   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.490126   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.991185   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.974547   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.473850   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.805651   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.306125   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.384992   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.385266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.385877   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:27.991827   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.991995   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.475603   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.974568   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:31.806483   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.306121   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.886341   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.385506   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.488948   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.490950   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.989621   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.474815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.973407   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.806806   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.806988   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.886043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.386865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.991151   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:41.491384   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:39.974109   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.473010   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.808362   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.305126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.886094   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.386710   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.991121   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.992500   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:44.475120   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:46.973837   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.305212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.305740   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.806334   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.886380   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.887578   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:48.490416   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:50.990196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.474209   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.474657   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.808853   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.305742   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.888488   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.385591   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:52.990333   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.991549   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:53.974301   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:55.976250   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.474372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.807759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.304597   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.885164   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.885809   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:57.491267   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.492043   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.991231   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:00.974064   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:02.975136   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.808275   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.385492   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.385865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:05.386266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.992513   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.490253   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:04.975537   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.473413   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.306066   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.805711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.886495   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.386100   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.995545   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.490960   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:09.476367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.974480   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.807870   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.306759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:12.386166   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.990090   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.489864   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.975102   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.474761   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.475314   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:15.809041   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.305700   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:17.385490   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:19.386201   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.490727   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.493813   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.973383   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.973978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.306906   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.805781   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.806417   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:21.387171   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:23.394663   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.989981   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.998602   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.975048   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.473804   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.805993   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:25.886256   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:28.385307   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:30.386473   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.490860   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.991665   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.992373   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.475815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.973092   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.305648   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.806797   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.886577   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.386203   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.490086   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:36.490465   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:33.973662   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.974041   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.473275   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.306848   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.806295   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.388154   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.886447   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.490850   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.989734   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.473543   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.473711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:41.807197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.305572   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.385788   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.386844   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.995794   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:45.490630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.474251   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.974425   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.306070   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.805530   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.886095   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.888504   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:47.491269   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.990921   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.474354   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.973552   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:50.806526   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.807021   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.385411   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.385825   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.490166   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:54.991982   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.974372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:56.473350   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.305863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.306450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.308315   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.886560   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.886950   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.386043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.490604   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.490811   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.993715   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:58.973152   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.975078   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.474589   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.806409   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.806552   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:02.387458   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.886066   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.490551   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:06.490632   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.974290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.974714   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.810256   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.305443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.386252   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:09.887808   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.490994   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.990417   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.474207   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.973759   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.305662   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.807626   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.385387   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.386055   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.991196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.489856   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.974362   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.474890   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.305348   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.306521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.306661   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:16.386682   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:18.386805   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.491969   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.990884   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.991904   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.476052   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.973290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.806863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.810113   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:20.886118   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.388653   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:24.490861   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.991437   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.474556   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.307894   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.809126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:25.885409   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:27.886080   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.386151   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:29.489358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.491041   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.973725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.975342   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.474590   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.306171   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.307126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:32.386190   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:34.886414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.491383   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.492155   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.974978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:38.473506   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.307221   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.806174   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.386235   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.886579   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.990447   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.991649   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.474117   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.973778   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.308130   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.806411   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.807765   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.385199   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.387102   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.491019   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.993076   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.974689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.473863   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.305509   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.305825   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:46.885280   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.385189   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.491661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.989457   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.991512   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.973709   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.976112   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.306459   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.805441   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.386498   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.887424   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.492074   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.989668   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.473073   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.473689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.474597   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:55.806711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.305434   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.386640   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.885298   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.995348   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:01.491262   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.974371   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.474367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.305803   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.806120   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:04.807184   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.886357   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.887274   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:05.386976   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.708637   45954 pod_ready.go:81] duration metric: took 4m0.000105295s waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:03.708672   45954 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:03.708681   45954 pod_ready.go:38] duration metric: took 4m6.567418041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
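(The long run of pod_ready.go:102 entries above is minikube polling the metrics-server pod's Ready condition until a four-minute budget expires, which is what the "context deadline exceeded" line records. The following is only an illustrative client-go sketch of that kind of deadline-bounded readiness poll; the pod name, namespace, kubeconfig path, and 2-second interval are assumptions for the example, not minikube's actual implementation.)

// Sketch: poll a pod's Ready condition until a deadline, similar in spirit
// to the pod_ready.go wait loop logged above. All concrete values here are
// illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same four-minute budget the log reports for the extra pod wait.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-hfgp8", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			// Corresponds to the "context deadline exceeded" entry above.
			fmt.Println("gave up waiting:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}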
	I0914 22:51:03.708699   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:51:03.708739   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:03.708804   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:03.759664   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:03.759688   45954 cri.go:89] found id: ""
	I0914 22:51:03.759697   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:03.759753   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.764736   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:03.764789   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:03.800251   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:03.800280   45954 cri.go:89] found id: ""
	I0914 22:51:03.800290   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:03.800341   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.804761   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:03.804818   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:03.847136   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:03.847162   45954 cri.go:89] found id: ""
	I0914 22:51:03.847172   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:03.847215   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.851253   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:03.851325   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:03.882629   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:03.882654   45954 cri.go:89] found id: ""
	I0914 22:51:03.882664   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:03.882713   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.887586   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:03.887642   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:03.916702   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:03.916723   45954 cri.go:89] found id: ""
	I0914 22:51:03.916730   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:03.916773   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.921172   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:03.921232   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:03.950593   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:03.950618   45954 cri.go:89] found id: ""
	I0914 22:51:03.950628   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:03.950689   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.954303   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:03.954366   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:03.982565   45954 cri.go:89] found id: ""
	I0914 22:51:03.982588   45954 logs.go:284] 0 containers: []
	W0914 22:51:03.982597   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:03.982604   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:03.982662   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:04.011932   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.011957   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:04.011964   45954 cri.go:89] found id: ""
	I0914 22:51:04.011972   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:04.012026   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.016091   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.019830   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:04.019852   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:04.061469   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:04.061494   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:04.092823   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:04.092846   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:04.156150   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:04.156190   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:04.169879   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:04.169920   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:04.226165   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:04.226198   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.255658   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:04.255692   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:04.299368   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:04.299401   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:04.440433   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:04.440467   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:04.477396   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:04.477425   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:04.513399   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:04.513431   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:05.016889   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:05.016925   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:05.067712   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:05.067749   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:05.973423   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.973637   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.307754   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.805419   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.389465   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.885150   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.597529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:51:07.614053   45954 api_server.go:72] duration metric: took 4m15.435815174s to wait for apiserver process to appear ...
	I0914 22:51:07.614076   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:51:07.614106   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:07.614155   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:07.643309   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:07.643333   45954 cri.go:89] found id: ""
	I0914 22:51:07.643342   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:07.643411   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.647434   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:07.647511   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:07.676943   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:07.676959   45954 cri.go:89] found id: ""
	I0914 22:51:07.676966   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:07.677006   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.681053   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:07.681101   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:07.714710   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:07.714736   45954 cri.go:89] found id: ""
	I0914 22:51:07.714745   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:07.714807   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.718900   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:07.718966   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:07.754786   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:07.754808   45954 cri.go:89] found id: ""
	I0914 22:51:07.754815   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:07.754867   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.759623   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:07.759693   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:07.794366   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:07.794389   45954 cri.go:89] found id: ""
	I0914 22:51:07.794398   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:07.794457   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.798717   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:07.798777   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:07.831131   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:07.831158   45954 cri.go:89] found id: ""
	I0914 22:51:07.831167   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:07.831227   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.835696   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:07.835762   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:07.865802   45954 cri.go:89] found id: ""
	I0914 22:51:07.865831   45954 logs.go:284] 0 containers: []
	W0914 22:51:07.865841   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:07.865849   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:07.865905   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:07.895025   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:07.895049   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:07.895056   45954 cri.go:89] found id: ""
	I0914 22:51:07.895064   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:07.895118   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.899230   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.903731   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:07.903751   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:08.033922   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:08.033952   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:08.068784   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:08.068812   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:08.120395   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:08.120428   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:08.133740   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:08.133763   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:08.173288   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:08.173320   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:08.203964   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:08.203988   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:08.732327   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:08.732367   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:08.784110   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:08.784150   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:08.819179   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:08.819210   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:08.866612   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:08.866644   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:08.900892   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:08.900939   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:08.950563   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:08.950593   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:11.505428   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:51:11.511228   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:51:11.512855   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:51:11.512881   45954 api_server.go:131] duration metric: took 3.898798182s to wait for apiserver health ...
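(The api_server.go entries above show the health gate: a GET against https://192.168.50.175:8444/healthz that must return 200 with body "ok" before minikube proceeds to the kube-system pod check. Below is only a bare-bones sketch of such a probe; the real check authenticates with the certificates from the kubeconfig, whereas this example skips TLS verification purely to stay short.)

// Sketch: minimal /healthz probe against the endpoint logged above.
// Skipping TLS verification is for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	resp, err := client.Get("https://192.168.50.175:8444/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", matching the log above.
	fmt.Printf("status %d, body %q\n", resp.StatusCode, string(body))
}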
	I0914 22:51:11.512891   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:51:11.512911   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:11.512954   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:11.544538   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:11.544563   45954 cri.go:89] found id: ""
	I0914 22:51:11.544573   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:11.544629   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.548878   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:11.548946   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:11.578439   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:11.578464   45954 cri.go:89] found id: ""
	I0914 22:51:11.578473   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:11.578531   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.582515   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:11.582576   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:11.611837   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:11.611857   45954 cri.go:89] found id: ""
	I0914 22:51:11.611863   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:11.611917   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.615685   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:11.615744   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:11.645850   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:11.645869   45954 cri.go:89] found id: ""
	I0914 22:51:11.645876   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:11.645914   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.649995   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:11.650048   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:11.683515   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:11.683541   45954 cri.go:89] found id: ""
	I0914 22:51:11.683550   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:11.683604   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.687715   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:11.687806   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:11.721411   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.721428   45954 cri.go:89] found id: ""
	I0914 22:51:11.721434   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:11.721477   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.725801   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:11.725859   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:11.760391   45954 cri.go:89] found id: ""
	I0914 22:51:11.760417   45954 logs.go:284] 0 containers: []
	W0914 22:51:11.760427   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:11.760437   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:11.760498   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:11.792140   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.792162   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:11.792168   45954 cri.go:89] found id: ""
	I0914 22:51:11.792175   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:11.792234   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.796600   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.800888   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:11.800912   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:11.863075   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:11.863106   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:11.877744   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:11.877775   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.930381   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:11.930418   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.961471   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:11.961497   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:12.005391   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:12.005417   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:12.034742   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:12.034771   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:12.064672   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:12.064702   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:12.095801   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:12.095834   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:12.124224   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:12.124260   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:09.974433   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.975389   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.806380   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.807443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:12.657331   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:12.657375   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:12.718197   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:12.718227   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:12.845353   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:12.845381   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:15.395502   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:51:15.395524   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.395529   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.395534   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.395540   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.395544   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.395548   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.395554   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.395559   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.395565   45954 system_pods.go:74] duration metric: took 3.882669085s to wait for pod list to return data ...
	I0914 22:51:15.395572   45954 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:51:15.398128   45954 default_sa.go:45] found service account: "default"
	I0914 22:51:15.398148   45954 default_sa.go:55] duration metric: took 2.571314ms for default service account to be created ...
	I0914 22:51:15.398155   45954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:51:15.407495   45954 system_pods.go:86] 8 kube-system pods found
	I0914 22:51:15.407517   45954 system_pods.go:89] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.407522   45954 system_pods.go:89] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.407527   45954 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.407532   45954 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.407535   45954 system_pods.go:89] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.407540   45954 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.407549   45954 system_pods.go:89] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.407558   45954 system_pods.go:89] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.407576   45954 system_pods.go:126] duration metric: took 9.409452ms to wait for k8s-apps to be running ...
	I0914 22:51:15.407587   45954 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:51:15.407633   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:15.424728   45954 system_svc.go:56] duration metric: took 17.122868ms WaitForService to wait for kubelet.
	I0914 22:51:15.424754   45954 kubeadm.go:581] duration metric: took 4m23.246518879s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:51:15.424794   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:51:15.428492   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:51:15.428520   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:51:15.428534   45954 node_conditions.go:105] duration metric: took 3.733977ms to run NodePressure ...
	I0914 22:51:15.428550   45954 start.go:228] waiting for startup goroutines ...
	I0914 22:51:15.428563   45954 start.go:233] waiting for cluster config update ...
	I0914 22:51:15.428576   45954 start.go:242] writing updated cluster config ...
	I0914 22:51:15.428887   45954 ssh_runner.go:195] Run: rm -f paused
	I0914 22:51:15.479711   45954 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:51:15.482387   45954 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799144" cluster and "default" namespace by default
	I0914 22:51:11.885968   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.887391   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:14.474188   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.974056   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.306146   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.806037   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.386306   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.386406   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:19.474446   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:21.474860   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.375841   46412 pod_ready.go:81] duration metric: took 4m0.000552226s waiting for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:22.375872   46412 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:22.375890   46412 pod_ready.go:38] duration metric: took 4m12.961510371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:22.375915   46412 kubeadm.go:640] restartCluster took 4m33.075347594s
	W0914 22:51:22.375983   46412 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:51:22.376022   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:51:20.806249   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.807141   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:24.809235   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:20.888098   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:23.386482   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:25.386542   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.305114   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:29.306240   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.886695   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:30.385740   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:31.306508   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:33.306655   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:32.886111   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.384925   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.805992   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:38.307801   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:37.385344   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:39.888303   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:40.806212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:43.305815   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:42.388414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:44.388718   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:45.306197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:47.806983   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:49.807150   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:46.885737   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:48.885794   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.115476   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.73941793s)
	I0914 22:51:53.115549   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:53.128821   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:51:53.137267   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:51:53.145533   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:51:53.145569   46412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 22:51:53.202279   46412 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 22:51:53.202501   46412 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:51:53.353512   46412 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:51:53.353674   46412 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:51:53.353816   46412 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:51:53.513428   46412 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:51:53.515450   46412 out.go:204]   - Generating certificates and keys ...
	I0914 22:51:53.515574   46412 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:51:53.515672   46412 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:51:53.515785   46412 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:51:53.515896   46412 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:51:53.516234   46412 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:51:53.516841   46412 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:51:53.517488   46412 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:51:53.517974   46412 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:51:53.518563   46412 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:51:53.519109   46412 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:51:53.519728   46412 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:51:53.519809   46412 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:51:53.641517   46412 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:51:53.842920   46412 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:51:53.982500   46412 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:51:54.065181   46412 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:51:54.065678   46412 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:51:54.071437   46412 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:51:52.305643   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.305996   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:51.386246   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.386956   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.073206   46412 out.go:204]   - Booting up control plane ...
	I0914 22:51:54.073363   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:51:54.073470   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:51:54.073554   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:51:54.091178   46412 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:51:54.091289   46412 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:51:54.091371   46412 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:51:54.221867   46412 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:51:56.306473   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:58.306953   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:55.886624   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:57.887222   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:00.385756   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.225144   46412 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002879 seconds
	I0914 22:52:02.225314   46412 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:02.244705   46412 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:02.778808   46412 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:02.779047   46412 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-588699 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:52:03.296381   46412 kubeadm.go:322] [bootstrap-token] Using token: x2l9oo.p0a5g5jx49srzji3
	I0914 22:52:03.297976   46412 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:03.298091   46412 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:03.308475   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:52:03.319954   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:03.325968   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:03.330375   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:03.334686   46412 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:03.353185   46412 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:52:03.622326   46412 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:03.721359   46412 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:03.721385   46412 kubeadm.go:322] 
	I0914 22:52:03.721472   46412 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:03.721486   46412 kubeadm.go:322] 
	I0914 22:52:03.721589   46412 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:03.721602   46412 kubeadm.go:322] 
	I0914 22:52:03.721623   46412 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:03.721678   46412 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:03.721727   46412 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:03.721764   46412 kubeadm.go:322] 
	I0914 22:52:03.721856   46412 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 22:52:03.721867   46412 kubeadm.go:322] 
	I0914 22:52:03.721945   46412 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:52:03.721954   46412 kubeadm.go:322] 
	I0914 22:52:03.722029   46412 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:03.722137   46412 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:03.722240   46412 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:03.722250   46412 kubeadm.go:322] 
	I0914 22:52:03.722367   46412 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:52:03.722468   46412 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:03.722479   46412 kubeadm.go:322] 
	I0914 22:52:03.722583   46412 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.722694   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:03.722719   46412 kubeadm.go:322] 	--control-plane 
	I0914 22:52:03.722752   46412 kubeadm.go:322] 
	I0914 22:52:03.722887   46412 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:03.722912   46412 kubeadm.go:322] 
	I0914 22:52:03.723031   46412 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.723170   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:03.724837   46412 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:03.724867   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:52:03.724879   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:03.726645   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:03.728115   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:03.741055   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:52:03.811746   46412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:03.811823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=embed-certs-588699 minikube.k8s.io/updated_at=2023_09_14T22_52_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:03.811827   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:00.805633   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.805831   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.807503   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.885499   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.886940   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.097721   46412 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:04.097763   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.186240   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.773886   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.273494   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.773993   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.274084   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.773309   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.273666   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.773916   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.274226   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.774073   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.807538   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.306062   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:06.886980   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.385212   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.274041   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:09.773409   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.274272   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.774321   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.274268   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.774250   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.273823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.774000   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.273596   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.774284   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.806015   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:14.308997   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:11.386087   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:12.580003   46713 pod_ready.go:81] duration metric: took 4m0.001053291s waiting for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:12.580035   46713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:12.580062   46713 pod_ready.go:38] duration metric: took 4m1.199260232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:12.580089   46713 kubeadm.go:640] restartCluster took 4m59.591702038s
	W0914 22:52:12.580145   46713 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:52:12.580169   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:52:14.274174   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:14.773472   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.273376   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.773286   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.273920   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.773334   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.926033   46412 kubeadm.go:1081] duration metric: took 13.114277677s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:16.926076   46412 kubeadm.go:406] StartCluster complete in 5m27.664586228s
	I0914 22:52:16.926099   46412 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.926229   46412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:16.928891   46412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.929177   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:16.929313   46412 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:16.929393   46412 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-588699"
	I0914 22:52:16.929408   46412 addons.go:69] Setting default-storageclass=true in profile "embed-certs-588699"
	I0914 22:52:16.929423   46412 addons.go:69] Setting metrics-server=true in profile "embed-certs-588699"
	I0914 22:52:16.929435   46412 addons.go:231] Setting addon metrics-server=true in "embed-certs-588699"
	W0914 22:52:16.929446   46412 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:16.929446   46412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-588699"
	I0914 22:52:16.929475   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:52:16.929508   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929418   46412 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-588699"
	W0914 22:52:16.929533   46412 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:16.929574   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929907   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929938   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929939   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929963   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929968   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929995   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.948975   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I0914 22:52:16.948990   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0914 22:52:16.948977   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0914 22:52:16.949953   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950006   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.949957   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950601   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950607   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950620   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950626   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950632   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950647   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.951159   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951191   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951410   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951808   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951829   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.951896   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951906   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.951911   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.961182   46412 addons.go:231] Setting addon default-storageclass=true in "embed-certs-588699"
	W0914 22:52:16.961207   46412 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:16.961236   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.961615   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.961637   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.976517   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0914 22:52:16.976730   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0914 22:52:16.977005   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977161   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977448   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977466   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977564   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977589   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977781   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977913   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977966   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.978108   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.980084   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.980429   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.982113   46412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:16.983227   46412 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:16.984383   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:16.984394   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:16.984407   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.983307   46412 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:16.984439   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:16.984455   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.987850   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0914 22:52:16.987989   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988270   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.988771   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.988788   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.988849   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.988867   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988894   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.989058   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.989528   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.989748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.990151   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.990172   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.990441   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:16.990597   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.990766   46412 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-588699" context rescaled to 1 replicas
	I0914 22:52:16.990794   46412 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:16.992351   46412 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:16.990986   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.991129   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.994003   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:16.994015   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.994097   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.994300   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.994607   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.007652   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0914 22:52:17.008127   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:17.008676   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:17.008699   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:17.009115   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:17.009301   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:17.010905   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:17.011169   46412 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.011183   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:17.011201   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:17.014427   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.014837   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:17.014865   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.015132   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:17.015299   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:17.015435   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:17.015585   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.124720   46412 node_ready.go:35] waiting up to 6m0s for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.124831   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:17.128186   46412 node_ready.go:49] node "embed-certs-588699" has status "Ready":"True"
	I0914 22:52:17.128211   46412 node_ready.go:38] duration metric: took 3.459847ms waiting for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.128221   46412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.133021   46412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138574   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.138594   46412 pod_ready.go:81] duration metric: took 5.550933ms waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138605   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151548   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.151569   46412 pod_ready.go:81] duration metric: took 12.956129ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151581   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169368   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.169393   46412 pod_ready.go:81] duration metric: took 17.803681ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169406   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.180202   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:17.180227   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:17.184052   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:17.227381   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:17.227411   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:17.233773   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.293762   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:17.293788   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:17.328911   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.328934   46412 pod_ready.go:81] duration metric: took 159.520585ms waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.328942   46412 pod_ready.go:38] duration metric: took 200.709608ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.328958   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:17.329008   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:17.379085   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:18.947663   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.822786746s)
	I0914 22:52:18.947705   46412 start.go:917] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:19.171809   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937996858s)
	I0914 22:52:19.171861   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171872   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.98779094s)
	I0914 22:52:19.171908   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171927   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171878   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171875   46412 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.842825442s)
	I0914 22:52:19.172234   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172277   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172292   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172289   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172307   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172322   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172352   46412 api_server.go:72] duration metric: took 2.181532709s to wait for apiserver process to appear ...
	I0914 22:52:19.172322   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172369   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.172377   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172387   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172396   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172410   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:52:19.172625   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172643   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172657   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172667   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172688   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172715   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172723   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172955   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172969   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.173012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.205041   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:52:19.209533   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:19.209561   46412 api_server.go:131] duration metric: took 37.185195ms to wait for apiserver health ...
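The healthz wait logged above amounts to polling the apiserver's /healthz endpoint until it answers 200 "ok". A small, self-contained Go sketch of such a poll is shown here; skipping TLS verification and the fixed 500ms poll interval are simplifications assumed to keep the example short, not how minikube's api_server.go is actually written.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200 "ok"
// or the timeout expires. Skipping certificate verification and the fixed 500ms
// poll interval are simplifications assumed for this sketch.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.205:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}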
	I0914 22:52:19.209573   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:19.225866   46412 system_pods.go:59] 7 kube-system pods found
	I0914 22:52:19.225893   46412 system_pods.go:61] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.225900   46412 system_pods.go:61] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.225908   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.225915   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.225921   46412 system_pods.go:61] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.225928   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.225934   46412 system_pods.go:61] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending
	I0914 22:52:19.225947   46412 system_pods.go:74] duration metric: took 16.366454ms to wait for pod list to return data ...
	I0914 22:52:19.225958   46412 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:19.232176   46412 default_sa.go:45] found service account: "default"
	I0914 22:52:19.232202   46412 default_sa.go:55] duration metric: took 6.234795ms for default service account to be created ...
	I0914 22:52:19.232221   46412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:19.238383   46412 system_pods.go:86] 7 kube-system pods found
	I0914 22:52:19.238415   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.238426   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.238433   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.238442   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.238448   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.238454   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.238463   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.238486   46412 retry.go:31] will retry after 271.864835ms: missing components: kube-dns
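The "will retry after …: missing components: kube-dns" lines come from a retry loop that re-lists kube-system pods until CoreDNS reports Running. A generic Go sketch of that pattern follows; the backoff growth rate is an assumption, since the exact policy in retry.go is not visible in this output.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil keeps calling check with a slowly growing delay until it succeeds
// or the deadline passes. The exact backoff policy used by retry.go is not
// visible in this output, so the growth factor here is an assumption.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, last error: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 4 // grow the wait a little on every attempt
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}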
	I0914 22:52:19.431792   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.052667289s)
	I0914 22:52:19.431858   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.431875   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432217   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432254   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432265   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432277   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.432291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432561   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432615   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432626   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432637   46412 addons.go:467] Verifying addon metrics-server=true in "embed-certs-588699"
	I0914 22:52:19.434406   46412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:15.499654   45407 pod_ready.go:81] duration metric: took 4m0.00095032s waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:15.499683   45407 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:15.499692   45407 pod_ready.go:38] duration metric: took 4m4.80145633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:15.499709   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:15.499741   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:15.499821   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:15.551531   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:15.551573   45407 cri.go:89] found id: ""
	I0914 22:52:15.551584   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:15.551638   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.555602   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:15.555649   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:15.583476   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:15.583497   45407 cri.go:89] found id: ""
	I0914 22:52:15.583504   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:15.583541   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.587434   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:15.587499   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:15.614791   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:15.614813   45407 cri.go:89] found id: ""
	I0914 22:52:15.614821   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:15.614865   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.618758   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:15.618813   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:15.651772   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:15.651798   45407 cri.go:89] found id: ""
	I0914 22:52:15.651807   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:15.651862   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.656464   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:15.656533   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:15.701258   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:15.701289   45407 cri.go:89] found id: ""
	I0914 22:52:15.701299   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:15.701359   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.705980   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:15.706049   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:15.741616   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:15.741640   45407 cri.go:89] found id: ""
	I0914 22:52:15.741647   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:15.741702   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.745863   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:15.745913   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:15.779362   45407 cri.go:89] found id: ""
	I0914 22:52:15.779385   45407 logs.go:284] 0 containers: []
	W0914 22:52:15.779395   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:15.779403   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:15.779462   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:15.815662   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:15.815691   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.815698   45407 cri.go:89] found id: ""
	I0914 22:52:15.815707   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:15.815781   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.820879   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.826312   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:15.826338   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.864143   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:15.864175   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:16.401646   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:16.401689   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:16.442964   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:16.443000   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:16.612411   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:16.612444   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:16.664620   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:16.664652   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:16.702405   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:16.702432   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:16.738583   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:16.738615   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:16.752752   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:16.752788   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:16.793883   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:16.793924   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:16.825504   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:16.825531   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:16.879008   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:16.879046   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:16.910902   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:16.910941   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.477726   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:19.494214   45407 api_server.go:72] duration metric: took 4m15.689238s to wait for apiserver process to appear ...
	I0914 22:52:19.494240   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.494281   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:19.494341   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:19.534990   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:19.535014   45407 cri.go:89] found id: ""
	I0914 22:52:19.535023   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:19.535081   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.540782   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:19.540850   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:19.570364   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:19.570390   45407 cri.go:89] found id: ""
	I0914 22:52:19.570399   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:19.570465   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.575964   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:19.576027   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:19.608023   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:19.608047   45407 cri.go:89] found id: ""
	I0914 22:52:19.608056   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:19.608098   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.612290   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:19.612343   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:19.644658   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:19.644682   45407 cri.go:89] found id: ""
	I0914 22:52:19.644692   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:19.644743   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.651016   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:19.651092   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:19.693035   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:19.693059   45407 cri.go:89] found id: ""
	I0914 22:52:19.693068   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:19.693122   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.697798   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:19.697864   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:19.733805   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.733828   45407 cri.go:89] found id: ""
	I0914 22:52:19.733837   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:19.733890   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.737902   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:19.737976   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:19.765139   45407 cri.go:89] found id: ""
	I0914 22:52:19.765169   45407 logs.go:284] 0 containers: []
	W0914 22:52:19.765180   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:19.765188   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:19.765248   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:19.793734   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.793756   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:19.793761   45407 cri.go:89] found id: ""
	I0914 22:52:19.793767   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:19.793807   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.797559   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.801472   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:19.801492   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:19.937110   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:19.937138   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.987564   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:19.987599   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.436138   46412 addons.go:502] enable addons completed in 2.506819532s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:19.523044   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.523077   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.523089   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.523096   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.523103   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.523109   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.523115   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.523124   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.523137   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.523164   46412 retry.go:31] will retry after 369.359833ms: missing components: kube-dns
	I0914 22:52:19.900488   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.900529   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.900541   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.900550   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.900558   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.900564   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.900571   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.900587   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.900608   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.900630   46412 retry.go:31] will retry after 329.450987ms: missing components: kube-dns
	I0914 22:52:20.245124   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.245152   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.245160   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.245166   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.245171   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.245177   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.245185   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.245194   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.245204   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.245225   46412 retry.go:31] will retry after 392.738624ms: missing components: kube-dns
	I0914 22:52:20.645671   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.645706   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.645716   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.645725   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.645737   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.645747   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.645756   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.645770   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.645783   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.645803   46412 retry.go:31] will retry after 463.608084ms: missing components: kube-dns
	I0914 22:52:21.118889   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:21.118920   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Running
	I0914 22:52:21.118926   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:21.118931   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:21.118937   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:21.118941   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:21.118946   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:21.118954   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:21.118963   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:21.118971   46412 system_pods.go:126] duration metric: took 1.886741356s to wait for k8s-apps to be running ...
	I0914 22:52:21.118984   46412 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:21.119025   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:21.134331   46412 system_svc.go:56] duration metric: took 15.34035ms WaitForService to wait for kubelet.
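The WaitForService step above checks the kubelet unit with `systemctl is-active --quiet service kubelet`. A minimal local Go equivalent of that probe is sketched below; minikube runs the command via sudo over SSH, and that wrapper is intentionally left out of the example.

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet systemd unit is running by calling
// `systemctl is-active --quiet kubelet`, which exits 0 only for an active unit.
// Minikube runs the same check via sudo over SSH; that wrapper is omitted here.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}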
	I0914 22:52:21.134358   46412 kubeadm.go:581] duration metric: took 4.143541631s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:21.134381   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:21.137182   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:21.137207   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:21.137230   46412 node_conditions.go:105] duration metric: took 2.834168ms to run NodePressure ...
	I0914 22:52:21.137243   46412 start.go:228] waiting for startup goroutines ...
	I0914 22:52:21.137252   46412 start.go:233] waiting for cluster config update ...
	I0914 22:52:21.137272   46412 start.go:242] writing updated cluster config ...
	I0914 22:52:21.137621   46412 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:21.184252   46412 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:21.186251   46412 out.go:177] * Done! kubectl is now configured to use "embed-certs-588699" cluster and "default" namespace by default
	I0914 22:52:20.022483   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:20.022512   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:20.062375   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:20.062403   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:20.099744   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:20.099776   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:20.129490   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:20.129515   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:20.165896   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:20.165922   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:20.692724   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:20.692758   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:20.761038   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:20.761086   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:20.777087   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:20.777114   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:20.808980   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:20.809020   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:20.845904   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:20.845942   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.393816   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:52:23.399946   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:52:23.401251   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:23.401271   45407 api_server.go:131] duration metric: took 3.907024801s to wait for apiserver health ...
	I0914 22:52:23.401279   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:23.401303   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:23.401346   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:23.433871   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.433895   45407 cri.go:89] found id: ""
	I0914 22:52:23.433905   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:23.433962   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.438254   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:23.438317   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:23.468532   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:23.468555   45407 cri.go:89] found id: ""
	I0914 22:52:23.468564   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:23.468626   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.473599   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:23.473658   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:23.509951   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:23.509976   45407 cri.go:89] found id: ""
	I0914 22:52:23.509986   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:23.510041   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.516637   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:23.516722   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:23.549562   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.549587   45407 cri.go:89] found id: ""
	I0914 22:52:23.549596   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:23.549653   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.553563   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:23.553626   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:23.584728   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:23.584749   45407 cri.go:89] found id: ""
	I0914 22:52:23.584756   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:23.584797   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.588600   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:23.588653   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:23.616590   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.616609   45407 cri.go:89] found id: ""
	I0914 22:52:23.616617   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:23.616669   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.620730   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:23.620782   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:23.648741   45407 cri.go:89] found id: ""
	I0914 22:52:23.648765   45407 logs.go:284] 0 containers: []
	W0914 22:52:23.648773   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:23.648781   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:23.648831   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:23.680814   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:23.680839   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:23.680846   45407 cri.go:89] found id: ""
	I0914 22:52:23.680854   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:23.680914   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.685954   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.690428   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:23.690459   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:23.818421   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:23.818456   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.867863   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:23.867894   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.903362   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:23.903393   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:23.943793   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:23.943820   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:24.538337   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:24.538390   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:24.585031   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:24.585072   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:24.639086   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:24.639120   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:24.650905   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:24.650925   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:24.698547   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:24.698590   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:24.745590   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:24.745619   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:24.777667   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:24.777697   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:24.811536   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:24.811565   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
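Each "Gathering logs for …" step above shells out to crictl to pull the last 400 lines from a single container. A minimal Go sketch of that call is below; invoking crictl locally (rather than through minikube's ssh_runner) and the placeholder container ID in main are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// crictlLogs fetches the last n lines of a container's logs, matching the
// `sudo /usr/bin/crictl logs --tail 400 <id>` commands above. Running crictl
// locally and the placeholder container ID in main are assumptions.
func crictlLogs(containerID string, n int) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(n), containerID).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := crictlLogs("CONTAINER_ID", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Print(logs)
}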
	I0914 22:52:25.132299   46713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (12.552094274s)
	I0914 22:52:25.132371   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:25.146754   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:52:25.155324   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:52:25.164387   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
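The "config check failed, skipping stale config cleanup" message above fires because the `sudo ls` probe found none of the old kubeconfig files, so there is nothing to clean up before kubeadm init. A local Go sketch of the same presence check follows; stat'ing the files directly instead of running ls over SSH is a simplification assumed for brevity.

package main

import (
	"fmt"
	"os"
)

// staleConfigsPresent mirrors the check above: if any of the old kubeconfig
// files still exist under /etc/kubernetes, a cleanup pass would be needed
// before kubeadm init. Stat'ing them locally instead of running `sudo ls`
// over SSH is a simplification assumed for this sketch.
func staleConfigsPresent() bool {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if _, err := os.Stat(f); err == nil {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("stale configs present:", staleConfigsPresent())
}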
	I0914 22:52:25.164429   46713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0914 22:52:25.227970   46713 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0914 22:52:25.228029   46713 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:52:25.376482   46713 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:52:25.376603   46713 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:52:25.376721   46713 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:52:25.536163   46713 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:52:25.536339   46713 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:52:25.543555   46713 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0914 22:52:25.663579   46713 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:52:25.665315   46713 out.go:204]   - Generating certificates and keys ...
	I0914 22:52:25.665428   46713 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:52:25.665514   46713 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:52:25.665610   46713 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:52:25.665688   46713 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:52:25.665777   46713 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:52:25.665844   46713 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:52:25.665925   46713 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:52:25.666002   46713 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:52:25.666095   46713 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:52:25.666223   46713 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:52:25.666277   46713 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:52:25.666352   46713 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:52:25.931689   46713 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:52:26.088693   46713 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:52:26.251867   46713 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:52:26.566157   46713 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:52:26.567520   46713 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:52:27.360740   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:52:27.360780   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.360788   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.360795   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.360802   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.360809   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.360816   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.360827   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.360841   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.360848   45407 system_pods.go:74] duration metric: took 3.959563404s to wait for pod list to return data ...
	I0914 22:52:27.360859   45407 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:27.363690   45407 default_sa.go:45] found service account: "default"
	I0914 22:52:27.363715   45407 default_sa.go:55] duration metric: took 2.849311ms for default service account to be created ...
	I0914 22:52:27.363724   45407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:27.372219   45407 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:27.372520   45407 system_pods.go:89] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.372552   45407 system_pods.go:89] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.372571   45407 system_pods.go:89] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.372590   45407 system_pods.go:89] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.372602   45407 system_pods.go:89] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.372616   45407 system_pods.go:89] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.372744   45407 system_pods.go:89] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.372835   45407 system_pods.go:89] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.372845   45407 system_pods.go:126] duration metric: took 9.100505ms to wait for k8s-apps to be running ...
	I0914 22:52:27.372854   45407 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:27.373084   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:27.390112   45407 system_svc.go:56] duration metric: took 17.249761ms WaitForService to wait for kubelet.
	I0914 22:52:27.390137   45407 kubeadm.go:581] duration metric: took 4m23.585167656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:27.390174   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:27.393099   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:27.393123   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:27.393133   45407 node_conditions.go:105] duration metric: took 2.953927ms to run NodePressure ...
	I0914 22:52:27.393142   45407 start.go:228] waiting for startup goroutines ...
	I0914 22:52:27.393148   45407 start.go:233] waiting for cluster config update ...
	I0914 22:52:27.393156   45407 start.go:242] writing updated cluster config ...
	I0914 22:52:27.393379   45407 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:27.441228   45407 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:27.442889   45407 out.go:177] * Done! kubectl is now configured to use "no-preload-344363" cluster and "default" namespace by default
	I0914 22:52:26.569354   46713 out.go:204]   - Booting up control plane ...
	I0914 22:52:26.569484   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:52:26.582407   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:52:26.589858   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:52:26.591607   46713 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:52:26.596764   46713 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:52:37.101083   46713 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503887 seconds
	I0914 22:52:37.101244   46713 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:37.116094   46713 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:37.633994   46713 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:37.634186   46713 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-930717 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 22:52:38.144071   46713 kubeadm.go:322] [bootstrap-token] Using token: jnf2g9.h0rslaob8wj902ym
	I0914 22:52:38.145543   46713 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:38.145661   46713 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:38.153514   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:38.159575   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:38.164167   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:38.167903   46713 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:38.241317   46713 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:38.572283   46713 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:38.572309   46713 kubeadm.go:322] 
	I0914 22:52:38.572399   46713 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:38.572410   46713 kubeadm.go:322] 
	I0914 22:52:38.572526   46713 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:38.572547   46713 kubeadm.go:322] 
	I0914 22:52:38.572581   46713 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:38.572669   46713 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:38.572762   46713 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:38.572775   46713 kubeadm.go:322] 
	I0914 22:52:38.572836   46713 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:38.572926   46713 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:38.573012   46713 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:38.573020   46713 kubeadm.go:322] 
	I0914 22:52:38.573089   46713 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0914 22:52:38.573152   46713 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:38.573159   46713 kubeadm.go:322] 
	I0914 22:52:38.573222   46713 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573313   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:38.573336   46713 kubeadm.go:322]     --control-plane 	  
	I0914 22:52:38.573343   46713 kubeadm.go:322] 
	I0914 22:52:38.573406   46713 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:38.573414   46713 kubeadm.go:322] 
	I0914 22:52:38.573527   46713 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573687   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:38.574219   46713 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:38.574248   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:52:38.574261   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:38.575900   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:38.577300   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:38.587120   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
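Configuring the bridge CNI above amounts to creating /etc/cni/net.d and copying a small conflist into it (457 bytes in this run). The Go sketch below writes an assumed minimal bridge+portmap conflist to that path; the JSON contents (subnet, plugin options) are illustrative and are not minikube's exact template.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// bridgeConflist is an assumed, minimal bridge+portmap CNI config. It is not
// minikube's exact 457-byte template from the log above; the subnet and plugin
// options are illustrative only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	path := filepath.Join("/etc/cni/net.d", "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("wrote", path)
}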
	I0914 22:52:38.610197   46713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:38.610265   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.610267   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=old-k8s-version-930717 minikube.k8s.io/updated_at=2023_09_14T22_52_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.858082   46713 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:38.858297   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.960045   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:39.549581   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.049788   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.549998   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.049043   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.549875   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.049596   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.549039   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.049563   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.549663   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.049534   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.549938   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.049227   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.549171   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.049628   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.550019   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.049857   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.549272   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.049648   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.549709   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.049770   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.550050   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.048948   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.549154   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.049695   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.549811   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.049813   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.549858   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.049505   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.149056   46713 kubeadm.go:1081] duration metric: took 14.538858246s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:53.149093   46713 kubeadm.go:406] StartCluster complete in 5m40.2118148s
	I0914 22:52:53.149114   46713 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.149200   46713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:53.150928   46713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.151157   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:53.151287   46713 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:53.151382   46713 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151391   46713 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151405   46713 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-930717"
	I0914 22:52:53.151411   46713 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-930717"
	W0914 22:52:53.151413   46713 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:53.151419   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:52:53.151423   46713 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-930717"
	W0914 22:52:53.151433   46713 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:53.151479   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151412   46713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-930717"
	I0914 22:52:53.151484   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151796   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151820   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151958   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.152044   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.170764   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0914 22:52:53.170912   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0914 22:52:53.171012   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0914 22:52:53.171235   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171345   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171378   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171850   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171870   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171970   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171991   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171999   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.172019   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.172232   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172517   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172572   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172759   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.172910   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.172987   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.173110   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.173146   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.189453   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0914 22:52:53.189789   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.190229   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.190251   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.190646   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.190822   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.192990   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.195176   46713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:53.194738   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0914 22:52:53.196779   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:53.196797   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:53.196813   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.195752   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.197457   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.197476   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.197849   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.198026   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.200022   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.200176   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.201917   46713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:53.200654   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.200795   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.203540   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.203632   46713 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.203652   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.203844   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.204002   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.206460   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.206968   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.206998   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.207153   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.207303   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.207524   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.207672   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.253944   46713 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-930717"
	W0914 22:52:53.253968   46713 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:53.253990   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.254330   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.254377   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0914 22:52:53.270047   46713 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-930717" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0914 22:52:53.270077   46713 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0914 22:52:53.270099   46713 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:53.271730   46713 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:53.270422   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0914 22:52:53.273255   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:53.273653   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.274180   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.274206   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.274559   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.275121   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.275165   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.291000   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0914 22:52:53.291405   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.291906   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.291927   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.292312   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.292529   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.294366   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.294583   46713 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.294598   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:53.294611   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.297265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.297809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297895   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.298057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.298236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.298383   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.344235   46713 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.344478   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:53.350176   46713 node_ready.go:49] node "old-k8s-version-930717" has status "Ready":"True"
	I0914 22:52:53.350196   46713 node_ready.go:38] duration metric: took 5.934445ms waiting for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.350204   46713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:53.359263   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:53.359296   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:53.367792   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:53.384576   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.397687   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:53.397703   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:53.439813   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:53.439843   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:53.473431   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.499877   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:54.233171   46713 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:54.365130   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365156   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365178   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365198   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365438   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365465   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365476   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365481   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.365486   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365546   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365556   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365565   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365574   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367064   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367090   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367068   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367489   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367513   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367526   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.367540   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367489   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367757   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367810   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367852   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.830646   46713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330728839s)
	I0914 22:52:54.830698   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.830711   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831036   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831059   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831065   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.831080   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.831096   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831312   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831328   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831338   46713 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-930717"
	I0914 22:52:54.832992   46713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:54.834828   46713 addons.go:502] enable addons completed in 1.683549699s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:55.415046   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:57.878279   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:59.879299   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:01.879559   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:03.880088   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:05.880334   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.880355   46713 pod_ready.go:81] duration metric: took 12.512536425s waiting for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.880364   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885370   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.885386   46713 pod_ready.go:81] duration metric: took 5.016722ms waiting for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885394   46713 pod_ready.go:38] duration metric: took 12.535181673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:53:05.885413   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:53:05.885466   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:53:05.901504   46713 api_server.go:72] duration metric: took 12.631380008s to wait for apiserver process to appear ...
	I0914 22:53:05.901522   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:53:05.901534   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:53:05.907706   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:53:05.908445   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:53:05.908466   46713 api_server.go:131] duration metric: took 6.937898ms to wait for apiserver health ...
	I0914 22:53:05.908475   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:53:05.911983   46713 system_pods.go:59] 5 kube-system pods found
	I0914 22:53:05.912001   46713 system_pods.go:61] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.912008   46713 system_pods.go:61] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.912013   46713 system_pods.go:61] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.912022   46713 system_pods.go:61] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.912033   46713 system_pods.go:61] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.912043   46713 system_pods.go:74] duration metric: took 3.562804ms to wait for pod list to return data ...
	I0914 22:53:05.912054   46713 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:53:05.914248   46713 default_sa.go:45] found service account: "default"
	I0914 22:53:05.914267   46713 default_sa.go:55] duration metric: took 2.203622ms for default service account to be created ...
	I0914 22:53:05.914276   46713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:53:05.917292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:05.917310   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.917315   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.917319   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.917325   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.917331   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.917343   46713 retry.go:31] will retry after 277.910308ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.201147   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.201170   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.201175   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.201179   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.201185   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.201191   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.201205   46713 retry.go:31] will retry after 262.96693ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.470372   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.470410   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.470418   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.470425   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.470435   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.470446   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.470481   46713 retry.go:31] will retry after 486.428451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.961666   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.961693   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.961700   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.961706   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.961716   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.961724   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.961740   46713 retry.go:31] will retry after 524.467148ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:07.491292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:07.491315   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:07.491321   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:07.491325   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:07.491331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:07.491337   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:07.491370   46713 retry.go:31] will retry after 567.308028ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.063587   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.063612   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.063618   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.063622   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.063629   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.063635   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.063649   46713 retry.go:31] will retry after 723.150919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.791530   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.791561   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.791571   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.791578   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.791588   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.791597   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.791616   46713 retry.go:31] will retry after 1.173741151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:09.971866   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:09.971895   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:09.971903   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:09.971909   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:09.971919   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:09.971928   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:09.971946   46713 retry.go:31] will retry after 1.046713916s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:11.024191   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:11.024220   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:11.024226   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:11.024231   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:11.024238   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:11.024244   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:11.024260   46713 retry.go:31] will retry after 1.531910243s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:12.562517   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:12.562555   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:12.562564   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:12.562573   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:12.562584   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:12.562594   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:12.562612   46713 retry.go:31] will retry after 2.000243773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:14.570247   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:14.570284   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:14.570294   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:14.570303   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:14.570320   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:14.570329   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:14.570346   46713 retry.go:31] will retry after 2.095330784s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:16.670345   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:16.670372   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:16.670377   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:16.670382   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:16.670394   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:16.670401   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:16.670416   46713 retry.go:31] will retry after 2.811644755s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:19.488311   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:19.488339   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:19.488344   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:19.488348   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:19.488354   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:19.488362   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:19.488380   46713 retry.go:31] will retry after 3.274452692s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:22.768417   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:22.768446   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:22.768454   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:22.768461   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:22.768471   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:22.768481   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:22.768499   46713 retry.go:31] will retry after 5.52037196s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:28.294932   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:28.294958   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:28.294964   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:28.294967   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:28.294975   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:28.294980   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:28.294994   46713 retry.go:31] will retry after 4.305647383s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:32.605867   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:32.605894   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:32.605900   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:32.605903   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:32.605910   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:32.605915   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:32.605929   46713 retry.go:31] will retry after 8.214918081s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:40.825284   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:40.825314   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:40.825319   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:40.825324   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:40.825331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:40.825336   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:40.825352   46713 retry.go:31] will retry after 10.5220598s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:51.353809   46713 system_pods.go:86] 7 kube-system pods found
	I0914 22:53:51.353844   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:51.353851   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:51.353856   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Pending
	I0914 22:53:51.353862   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:51.353868   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Pending
	I0914 22:53:51.353878   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:51.353887   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:51.353907   46713 retry.go:31] will retry after 10.482387504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:54:01.842876   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:01.842900   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:01.842905   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:01.842909   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Pending
	I0914 22:54:01.842914   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:01.842918   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Pending
	I0914 22:54:01.842921   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:01.842925   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:01.842931   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:01.842937   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:01.842950   46713 retry.go:31] will retry after 14.535469931s: missing components: etcd, kube-controller-manager
	I0914 22:54:16.384703   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:16.384732   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:16.384738   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:16.384742   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Running
	I0914 22:54:16.384747   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:16.384751   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Running
	I0914 22:54:16.384754   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:16.384758   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:16.384766   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:16.384773   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:16.384782   46713 system_pods.go:126] duration metric: took 1m10.470499333s to wait for k8s-apps to be running ...
	I0914 22:54:16.384791   46713 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:54:16.384849   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:16.409329   46713 system_svc.go:56] duration metric: took 24.530447ms WaitForService to wait for kubelet.
	I0914 22:54:16.409359   46713 kubeadm.go:581] duration metric: took 1m23.139238057s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:54:16.409385   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:54:16.412461   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:54:16.412490   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:54:16.412505   46713 node_conditions.go:105] duration metric: took 3.107771ms to run NodePressure ...
	I0914 22:54:16.412519   46713 start.go:228] waiting for startup goroutines ...
	I0914 22:54:16.412529   46713 start.go:233] waiting for cluster config update ...
	I0914 22:54:16.412546   46713 start.go:242] writing updated cluster config ...
	I0914 22:54:16.412870   46713 ssh_runner.go:195] Run: rm -f paused
	I0914 22:54:16.460181   46713 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0914 22:54:16.461844   46713 out.go:177] 
	W0914 22:54:16.463221   46713 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0914 22:54:16.464486   46713 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0914 22:54:16.465912   46713 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-930717" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:46:33 UTC, ends at Thu 2023-09-14 23:01:22 UTC. --
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.653628304Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9ea709bb4444541ec0e3dab990898a90b233a26eebdf05b73246815908b26f72,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-wb27t,Uid:41d83cd2-a4b5-4b49-99ac-2fa390769083,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731939633260235,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-wb27t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41d83cd2-a4b5-4b49-99ac-2fa390769083,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:19.307002626Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1c40fd3f-cdee-4408-87f1-c732015460c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731939519453400,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T22:52:19.185612969Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-ws5b8,Uid:8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731939365238991,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:17.522430953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&PodSandboxMetadata{Name:kube-proxy-9gwgv,Uid:d702b24f-9d6e-4650-8892-0be54cb46991,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731937546612512,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:17.204568218Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-588699,Uid:a59901a40eaa5f9a78f2d9bc5208557c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731915587338311,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a59901a40eaa5f9a78f2d9bc5208557c,kubernetes.io/config.seen: 2023-09-14T22:51:55.078627648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-588699,Uid:38fc36a6071a7a2c7d0662f8c44c45c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731915581890055,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d0662f8c44c45c6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.205:2379,kubernetes.io/config.hash: 38fc36a6071a7a2c7d0662f8c44c45c6,kubernetes.io/config.seen: 2023-09-14T22:51:55.078619603Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-588699,Uid:e439c9af5f322909832e5f89900d71ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731915577668487,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e439c9af5f322909832e5f89900d71ab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e439c9af5f322909832e5f89900d71ab,kubernetes.io/config.seen: 2023-09-14T22:51:55.078629128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-588699,Uid:2555981d7842bbd1e687c979fbcfea59,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731915540231489,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea59,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.205:8443,kubernetes.io/config.hash: 2555981d7842bbd1e687c979fbcfea59,kubernetes.io/config.seen: 2023-09-14T22:51:55.078625722Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=d1b642a0-12aa-4263-bf1b-fb944da6343b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.654448970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b0a28219-6c6a-4671-8e45-dc242141564d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.654499372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b0a28219-6c6a-4671-8e45-dc242141564d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.654716452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b0a28219-6c6a-4671-8e45-dc242141564d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.669642285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c22c465b-025d-41fe-bd17-d69331d74311 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.669700975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c22c465b-025d-41fe-bd17-d69331d74311 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.669855874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c22c465b-025d-41fe-bd17-d69331d74311 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.700859801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f23c9395-4078-4da1-ab46-dab21e607a10 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.700917256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f23c9395-4078-4da1-ab46-dab21e607a10 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.701140635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f23c9395-4078-4da1-ab46-dab21e607a10 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.735573546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7278ba0d-d52e-4483-bd8f-5bcc1f3381fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.735637724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7278ba0d-d52e-4483-bd8f-5bcc1f3381fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.735838304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7278ba0d-d52e-4483-bd8f-5bcc1f3381fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.772085271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28746a14-dcf0-48e5-9f35-1a426395943f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.772147262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=28746a14-dcf0-48e5-9f35-1a426395943f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.772391498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=28746a14-dcf0-48e5-9f35-1a426395943f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.806612943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1963de71-236f-4aee-96bf-e638ce5168d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.806717197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1963de71-236f-4aee-96bf-e638ce5168d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.806880865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1963de71-236f-4aee-96bf-e638ce5168d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.840331661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=404580c3-6e6f-41c8-8a38-0fe297832374 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.840391192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=404580c3-6e6f-41c8-8a38-0fe297832374 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.840549511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=404580c3-6e6f-41c8-8a38-0fe297832374 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.868398000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=367f5f14-ad0a-4c93-8197-626ce0e3a850 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.868461750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=367f5f14-ad0a-4c93-8197-626ce0e3a850 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:22 embed-certs-588699 crio[712]: time="2023-09-14 23:01:22.868683837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=367f5f14-ad0a-4c93-8197-626ce0e3a850 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	cbdeed7dded6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5e91bcbf6c9eb
	86f5cdd9f560f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   1ebc35026b2aa
	d2724572351c0       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   9 minutes ago       Running             kube-proxy                0                   2ff8c35b50ce4
	ab7d6b33e6b39       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   9 minutes ago       Running             kube-scheduler            2                   3140b81f7dffd
	6e4522f4466d1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   ee5258b32dd20
	28440a9764355       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   9 minutes ago       Running             kube-controller-manager   2                   e8195cecec00f
	e6f0ef2b040e6       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   9 minutes ago       Running             kube-apiserver            2                   016a9a89d6a9e
	
	* 
	* ==> coredns [86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44100 - 34745 "HINFO IN 5964101752069034912.334658549267858832. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008402338s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-588699
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-588699
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=embed-certs-588699
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_52_03_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:52:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-588699
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 23:01:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:57:30 +0000   Thu, 14 Sep 2023 22:51:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:57:30 +0000   Thu, 14 Sep 2023 22:51:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:57:30 +0000   Thu, 14 Sep 2023 22:51:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:57:30 +0000   Thu, 14 Sep 2023 22:52:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.205
	  Hostname:    embed-certs-588699
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 76fa946e45204ff4b777d25ef1a06f89
	  System UUID:                76fa946e-4520-4ff4-b777-d25ef1a06f89
	  Boot ID:                    25dee32c-d04d-4a7b-85ed-67595cf612f9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ws5b8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-embed-certs-588699                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-588699             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-588699    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-9gwgv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-embed-certs-588699             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-wb27t               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node embed-certs-588699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node embed-certs-588699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node embed-certs-588699 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-588699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-588699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-588699 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s                  kubelet          Node embed-certs-588699 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m20s                  kubelet          Node embed-certs-588699 status is now: NodeReady
	  Normal  RegisteredNode           9m7s                   node-controller  Node embed-certs-588699 event: Registered Node embed-certs-588699 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep14 22:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.397768] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.731309] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138883] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.350390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.493875] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.132967] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.170458] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.121809] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.228973] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +17.467234] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[Sep14 22:47] kauditd_printk_skb: 29 callbacks suppressed
	[Sep14 22:51] systemd-fstab-generator[3472]: Ignoring "noauto" for root device
	[Sep14 22:52] systemd-fstab-generator[3794]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb] <==
	* {"level":"info","ts":"2023-09-14T22:51:58.072896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 switched to configuration voters=(5855946521106091300)"}
	{"level":"info","ts":"2023-09-14T22:51:58.073038Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2835eac8f11eb509","local-member-id":"51448055b6368d24","added-peer-id":"51448055b6368d24","added-peer-peer-urls":["https://192.168.61.205:2380"]}
	{"level":"info","ts":"2023-09-14T22:51:58.077234Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T22:51:58.077413Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"51448055b6368d24","initial-advertise-peer-urls":["https://192.168.61.205:2380"],"listen-peer-urls":["https://192.168.61.205:2380"],"advertise-client-urls":["https://192.168.61.205:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.205:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T22:51:58.077439Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T22:51:58.077501Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.205:2380"}
	{"level":"info","ts":"2023-09-14T22:51:58.077507Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.205:2380"}
	{"level":"info","ts":"2023-09-14T22:51:58.22024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T22:51:58.220347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T22:51:58.220382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 received MsgPreVoteResp from 51448055b6368d24 at term 1"}
	{"level":"info","ts":"2023-09-14T22:51:58.220411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:51:58.220436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 received MsgVoteResp from 51448055b6368d24 at term 2"}
	{"level":"info","ts":"2023-09-14T22:51:58.220463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T22:51:58.220488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 51448055b6368d24 elected leader 51448055b6368d24 at term 2"}
	{"level":"info","ts":"2023-09-14T22:51:58.225437Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:51:58.227484Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"51448055b6368d24","local-member-attributes":"{Name:embed-certs-588699 ClientURLs:[https://192.168.61.205:2379]}","request-path":"/0/members/51448055b6368d24/attributes","cluster-id":"2835eac8f11eb509","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:51:58.227787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:51:58.229051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:51:58.231273Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2835eac8f11eb509","local-member-id":"51448055b6368d24","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:51:58.23183Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:51:58.231893Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:51:58.231503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:51:58.232292Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:51:58.232336Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:51:58.232894Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.205:2379"}
	
	* 
	* ==> kernel <==
	*  23:01:23 up 14 min,  0 users,  load average: 0.09, 0.14, 0.12
	Linux embed-certs-588699 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096] <==
	* W0914 22:57:01.319535       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:57:01.319688       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 22:57:01.320934       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 22:58:00.193865       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.66.124:443: connect: connection refused
	I0914 22:58:00.193920       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 22:58:01.319706       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:58:01.319819       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:58:01.319829       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 22:58:01.321677       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:58:01.321710       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 22:58:01.321717       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 22:59:00.193496       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.66.124:443: connect: connection refused
	I0914 22:59:00.193570       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 23:00:00.194449       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.66.124:443: connect: connection refused
	I0914 23:00:00.194738       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:00:01.320839       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:00:01.321036       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:00:01.321071       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:00:01.321955       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:00:01.322017       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 23:00:01.323217       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:01:00.193700       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.66.124:443: connect: connection refused
	I0914 23:01:00.193960       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f] <==
	* I0914 22:55:46.868406       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:56:16.401510       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:56:16.877632       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:56:46.407332       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:56:46.887282       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:57:16.412967       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:57:16.898284       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:57:46.419206       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:57:46.907418       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 22:58:15.709889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="373.836µs"
	E0914 22:58:16.426497       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:58:16.915746       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 22:58:30.707958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="107.622µs"
	E0914 22:58:46.432823       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:58:46.925041       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:59:16.439291       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:59:16.935322       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:59:46.445718       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:59:46.945541       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:00:16.458253       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:00:16.954856       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:00:46.464551       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:00:46.963646       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:01:16.470513       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:01:16.972270       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039] <==
	* I0914 22:52:19.163206       1 server_others.go:69] "Using iptables proxy"
	I0914 22:52:19.238652       1 node.go:141] Successfully retrieved node IP: 192.168.61.205
	I0914 22:52:19.431852       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:52:19.432112       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:52:19.445950       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:52:19.446354       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:52:19.446534       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:52:19.446725       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:52:19.448108       1 config.go:188] "Starting service config controller"
	I0914 22:52:19.448149       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:52:19.448310       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:52:19.448458       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:52:19.449118       1 config.go:315] "Starting node config controller"
	I0914 22:52:19.449260       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:52:19.552046       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:52:19.554875       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:52:19.568287       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6] <==
	* W0914 22:52:00.344812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 22:52:00.344846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 22:52:00.345127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:00.345726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:00.346224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:00.346411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:00.346535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:52:00.346571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0914 22:52:00.346636       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:00.346675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:00.346921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:52:00.347102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 22:52:00.347004       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:52:00.347454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 22:52:00.347047       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:52:00.347692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 22:52:01.402497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:01.402628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:01.439823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:01.439949       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:01.586523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:52:01.586576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 22:52:01.825568       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:52:01.825631       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0914 22:52:03.633561       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:46:33 UTC, ends at Thu 2023-09-14 23:01:23 UTC. --
	Sep 14 22:58:45 embed-certs-588699 kubelet[3801]: E0914 22:58:45.699010    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 22:59:00 embed-certs-588699 kubelet[3801]: E0914 22:59:00.692762    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 22:59:03 embed-certs-588699 kubelet[3801]: E0914 22:59:03.726809    3801 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:59:03 embed-certs-588699 kubelet[3801]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:59:03 embed-certs-588699 kubelet[3801]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:59:03 embed-certs-588699 kubelet[3801]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:59:13 embed-certs-588699 kubelet[3801]: E0914 22:59:13.693602    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 22:59:24 embed-certs-588699 kubelet[3801]: E0914 22:59:24.692480    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 22:59:36 embed-certs-588699 kubelet[3801]: E0914 22:59:36.692629    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 22:59:47 embed-certs-588699 kubelet[3801]: E0914 22:59:47.693121    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:00:01 embed-certs-588699 kubelet[3801]: E0914 23:00:01.693486    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:00:03 embed-certs-588699 kubelet[3801]: E0914 23:00:03.728873    3801 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:00:03 embed-certs-588699 kubelet[3801]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:00:03 embed-certs-588699 kubelet[3801]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:00:03 embed-certs-588699 kubelet[3801]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:00:15 embed-certs-588699 kubelet[3801]: E0914 23:00:15.692789    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:00:27 embed-certs-588699 kubelet[3801]: E0914 23:00:27.692340    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:00:40 embed-certs-588699 kubelet[3801]: E0914 23:00:40.692647    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:00:52 embed-certs-588699 kubelet[3801]: E0914 23:00:52.692819    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:01:03 embed-certs-588699 kubelet[3801]: E0914 23:01:03.726710    3801 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:01:03 embed-certs-588699 kubelet[3801]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:01:03 embed-certs-588699 kubelet[3801]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:01:03 embed-certs-588699 kubelet[3801]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:01:07 embed-certs-588699 kubelet[3801]: E0914 23:01:07.696337    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:01:19 embed-certs-588699 kubelet[3801]: E0914 23:01:19.693066    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	
	* 
	* ==> storage-provisioner [cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee] <==
	* I0914 22:52:20.950437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:52:20.962893       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:52:20.963195       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:52:20.972727       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:52:20.973584       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-588699_308c6d6c-7d33-4cae-b328-30579a567551!
	I0914 22:52:20.972964       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2391c686-c332-4acf-99d9-c85e2955dd08", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-588699_308c6d6c-7d33-4cae-b328-30579a567551 became leader
	I0914 22:52:21.074612       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-588699_308c6d6c-7d33-4cae-b328-30579a567551!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-588699 -n embed-certs-588699
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-588699 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wb27t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-588699 describe pod metrics-server-57f55c9bc5-wb27t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-588699 describe pod metrics-server-57f55c9bc5-wb27t: exit status 1 (64.660785ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wb27t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-588699 describe pod metrics-server-57f55c9bc5-wb27t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 22:53:32.188262   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-344363 -n no-preload-344363
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-14 23:01:27.985123114 +0000 UTC m=+5110.179464751
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
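(Editor's note: the wait that timed out above is a plain label-selector poll against the kubernetes-dashboard namespace. A minimal sketch of reproducing that check by hand, assuming the no-preload-344363 context from these logs is still available locally; these are standard kubectl commands, not part of the test harness output, and the 540s timeout simply mirrors the test's 9m0s deadline:

	# List dashboard pods by the same label the test waits on
	kubectl --context no-preload-344363 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# Block until the pods report Ready, mirroring the test's 9m0s deadline
	kubectl --context no-preload-344363 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s

If the dashboard deployment was never created after the stop/start cycle, the first command returns "No resources found", which matches the context-deadline failure recorded here.)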
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-344363 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-344363 logs -n 25: (1.512586655s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-711912                           | kubernetes-upgrade-711912    | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:36 UTC |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-344363             | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:40 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799144  | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC |                     |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-344363                  | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-588699            | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799144       | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-930717        | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:51 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-588699                 | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-930717             | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC | 14 Sep 23 22:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:45:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:45:20.513575   46713 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:45:20.513835   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.513847   46713 out.go:309] Setting ErrFile to fd 2...
	I0914 22:45:20.513852   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.514030   46713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:45:20.514571   46713 out.go:303] Setting JSON to false
	I0914 22:45:20.515550   46713 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5263,"bootTime":1694726258,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:45:20.515607   46713 start.go:138] virtualization: kvm guest
	I0914 22:45:20.517738   46713 out.go:177] * [old-k8s-version-930717] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:45:20.519301   46713 notify.go:220] Checking for updates...
	I0914 22:45:20.519309   46713 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:45:20.520886   46713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:45:20.522525   46713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:45:20.524172   46713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:45:20.525826   46713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:45:20.527204   46713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:45:20.529068   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:45:20.529489   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.529542   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.548088   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0914 22:45:20.548488   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.548969   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.548985   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.549404   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.549555   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.551507   46713 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 22:45:20.552878   46713 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:45:20.553145   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.553176   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.566825   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0914 22:45:20.567181   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.567617   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.567646   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.568018   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.568195   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.601886   46713 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:45:20.603176   46713 start.go:298] selected driver: kvm2
	I0914 22:45:20.603188   46713 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.603284   46713 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:45:20.603926   46713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.603997   46713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:45:20.617678   46713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:45:20.618009   46713 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:45:20.618045   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:45:20.618062   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:45:20.618075   46713 start_flags.go:321] config:
	{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.618204   46713 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.619892   46713 out.go:177] * Starting control plane node old-k8s-version-930717 in cluster old-k8s-version-930717
	I0914 22:45:22.939748   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:20.621146   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:45:20.621171   46713 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0914 22:45:20.621184   46713 cache.go:57] Caching tarball of preloaded images
	I0914 22:45:20.621265   46713 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:45:20.621286   46713 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0914 22:45:20.621381   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:45:20.621551   46713 start.go:365] acquiring machines lock for old-k8s-version-930717: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:45:29.019730   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:32.091705   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:38.171724   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:41.243661   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:47.323733   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:50.395751   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:56.475703   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:59.547782   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:46:02.551591   45954 start.go:369] acquired machines lock for "default-k8s-diff-port-799144" in 3m15.018428257s
	I0914 22:46:02.551631   45954 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:02.551642   45954 fix.go:54] fixHost starting: 
	I0914 22:46:02.551944   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:02.551972   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:02.566520   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0914 22:46:02.566922   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:02.567373   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:02.567392   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:02.567734   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:02.567961   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:02.568128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:02.569692   45954 fix.go:102] recreateIfNeeded on default-k8s-diff-port-799144: state=Stopped err=<nil>
	I0914 22:46:02.569714   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	W0914 22:46:02.569887   45954 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:02.571684   45954 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799144" ...
	I0914 22:46:02.549458   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:02.549490   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:46:02.551419   45407 machine.go:91] provisioned docker machine in 4m37.435317847s
	I0914 22:46:02.551457   45407 fix.go:56] fixHost completed within 4m37.455553972s
	I0914 22:46:02.551462   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 4m37.455581515s
	W0914 22:46:02.551502   45407 start.go:688] error starting host: provision: host is not running
	W0914 22:46:02.551586   45407 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0914 22:46:02.551600   45407 start.go:703] Will try again in 5 seconds ...
	I0914 22:46:02.573354   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Start
	I0914 22:46:02.573535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring networks are active...
	I0914 22:46:02.574326   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network default is active
	I0914 22:46:02.574644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network mk-default-k8s-diff-port-799144 is active
	I0914 22:46:02.575046   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Getting domain xml...
	I0914 22:46:02.575767   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Creating domain...
	I0914 22:46:03.792613   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting to get IP...
	I0914 22:46:03.793573   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.793932   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.794029   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:03.793928   46868 retry.go:31] will retry after 250.767464ms: waiting for machine to come up
	I0914 22:46:04.046447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046928   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.046853   46868 retry.go:31] will retry after 320.29371ms: waiting for machine to come up
	I0914 22:46:04.368383   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368782   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368814   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.368726   46868 retry.go:31] will retry after 295.479496ms: waiting for machine to come up
	I0914 22:46:04.666192   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666655   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666680   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.666595   46868 retry.go:31] will retry after 572.033699ms: waiting for machine to come up
	I0914 22:46:05.240496   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240920   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240953   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.240872   46868 retry.go:31] will retry after 493.557238ms: waiting for machine to come up
	I0914 22:46:05.735682   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736201   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.736150   46868 retry.go:31] will retry after 848.645524ms: waiting for machine to come up
	I0914 22:46:06.586116   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586568   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:06.586473   46868 retry.go:31] will retry after 866.110647ms: waiting for machine to come up
	I0914 22:46:07.553803   45407 start.go:365] acquiring machines lock for no-preload-344363: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:46:07.454431   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454798   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454827   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:07.454743   46868 retry.go:31] will retry after 1.485337575s: waiting for machine to come up
	I0914 22:46:08.941761   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942136   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942177   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:08.942104   46868 retry.go:31] will retry after 1.640651684s: waiting for machine to come up
	I0914 22:46:10.584576   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584939   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:10.584838   46868 retry.go:31] will retry after 1.656716681s: waiting for machine to come up
	I0914 22:46:12.243599   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244096   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:12.244037   46868 retry.go:31] will retry after 2.692733224s: waiting for machine to come up
	I0914 22:46:14.939726   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940035   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940064   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:14.939986   46868 retry.go:31] will retry after 2.745837942s: waiting for machine to come up
	I0914 22:46:22.180177   46412 start.go:369] acquired machines lock for "embed-certs-588699" in 2m3.238409394s
	I0914 22:46:22.180244   46412 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:22.180256   46412 fix.go:54] fixHost starting: 
	I0914 22:46:22.180661   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:22.180706   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:22.196558   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0914 22:46:22.196900   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:22.197304   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:46:22.197326   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:22.197618   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:22.197808   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:22.197986   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:46:22.199388   46412 fix.go:102] recreateIfNeeded on embed-certs-588699: state=Stopped err=<nil>
	I0914 22:46:22.199423   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	W0914 22:46:22.199595   46412 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:22.202757   46412 out.go:177] * Restarting existing kvm2 VM for "embed-certs-588699" ...
	I0914 22:46:17.687397   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687911   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687937   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:17.687878   46868 retry.go:31] will retry after 3.174192278s: waiting for machine to come up
	I0914 22:46:20.866173   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866687   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Found IP for machine: 192.168.50.175
	I0914 22:46:20.866722   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has current primary IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866737   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserving static IP address...
	I0914 22:46:20.867209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.867245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | skip adding static IP to network mk-default-k8s-diff-port-799144 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"}
	I0914 22:46:20.867263   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserved static IP address: 192.168.50.175
	I0914 22:46:20.867290   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for SSH to be available...
	I0914 22:46:20.867303   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Getting to WaitForSSH function...
	I0914 22:46:20.869597   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.869960   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.869993   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.870103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH client type: external
	I0914 22:46:20.870137   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa (-rw-------)
	I0914 22:46:20.870193   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:20.870218   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | About to run SSH command:
	I0914 22:46:20.870237   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | exit 0
	I0914 22:46:20.959125   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:20.959456   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetConfigRaw
	I0914 22:46:20.960082   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:20.962512   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.962889   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.962915   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.963114   45954 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/config.json ...
	I0914 22:46:20.963282   45954 machine.go:88] provisioning docker machine ...
	I0914 22:46:20.963300   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:20.963509   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963682   45954 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799144"
	I0914 22:46:20.963709   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963899   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:20.966359   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966728   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.966757   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966956   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:20.967146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967287   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967420   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:20.967584   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:20.967963   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:20.967983   45954 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799144 && echo "default-k8s-diff-port-799144" | sudo tee /etc/hostname
	I0914 22:46:21.098114   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799144
	
	I0914 22:46:21.098158   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.100804   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101167   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.101208   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.101532   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101855   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.102028   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.102386   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.102406   45954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799144/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:21.225929   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:21.225964   45954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:21.225992   45954 buildroot.go:174] setting up certificates
	I0914 22:46:21.226007   45954 provision.go:83] configureAuth start
	I0914 22:46:21.226023   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:21.226299   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:21.229126   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229514   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.229555   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.231683   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.231992   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.232027   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.232179   45954 provision.go:138] copyHostCerts
	I0914 22:46:21.232233   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:21.232247   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:21.232321   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:21.232412   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:21.232421   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:21.232446   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:21.232542   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:21.232551   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:21.232572   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:21.232617   45954 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799144 san=[192.168.50.175 192.168.50.175 localhost 127.0.0.1 minikube default-k8s-diff-port-799144]
	I0914 22:46:21.489180   45954 provision.go:172] copyRemoteCerts
	I0914 22:46:21.489234   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:21.489257   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.491989   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492308   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.492334   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.492734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.492869   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.493038   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:21.579991   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0914 22:46:21.599819   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:21.619391   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:21.638607   45954 provision.go:86] duration metric: configureAuth took 412.585328ms
	I0914 22:46:21.638629   45954 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:21.638797   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:21.638867   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.641693   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642033   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.642067   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.642399   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642562   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.642900   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.643239   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.643257   45954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:21.928913   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:21.928940   45954 machine.go:91] provisioned docker machine in 965.645328ms
	I0914 22:46:21.928952   45954 start.go:300] post-start starting for "default-k8s-diff-port-799144" (driver="kvm2")
	I0914 22:46:21.928964   45954 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:21.928987   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:21.929377   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:21.929425   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.931979   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932350   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.932388   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932475   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.932704   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.932923   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.933059   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.020329   45954 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:22.024444   45954 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:22.024458   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:22.024513   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:22.024589   45954 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:22.024672   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:22.033456   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:22.054409   45954 start.go:303] post-start completed in 125.445528ms
	I0914 22:46:22.054427   45954 fix.go:56] fixHost completed within 19.502785226s
	I0914 22:46:22.054444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.057353   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057690   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.057721   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057925   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.058139   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058304   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058483   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.058657   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:22.059051   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:22.059065   45954 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 22:46:22.180023   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731582.133636857
	
	I0914 22:46:22.180044   45954 fix.go:206] guest clock: 1694731582.133636857
	I0914 22:46:22.180054   45954 fix.go:219] Guest: 2023-09-14 22:46:22.133636857 +0000 UTC Remote: 2023-09-14 22:46:22.054430307 +0000 UTC m=+214.661061156 (delta=79.20655ms)
	I0914 22:46:22.180078   45954 fix.go:190] guest clock delta is within tolerance: 79.20655ms
	I0914 22:46:22.180084   45954 start.go:83] releasing machines lock for "default-k8s-diff-port-799144", held for 19.628473828s
	I0914 22:46:22.180114   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.180408   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:22.183182   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183507   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.183543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183675   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184175   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184384   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184494   45954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:22.184535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.184627   45954 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:22.184662   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.187447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187604   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187813   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.187839   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187971   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.187986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.188024   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.188151   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.188153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188344   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188391   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188500   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.188519   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188618   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.303009   45954 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:22.308185   45954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:22.450504   45954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:22.455642   45954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:22.455700   45954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:22.468430   45954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:22.468453   45954 start.go:469] detecting cgroup driver to use...
	I0914 22:46:22.468509   45954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:22.483524   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:22.494650   45954 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:22.494706   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:22.506589   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:22.518370   45954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:22.619545   45954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:22.737486   45954 docker.go:212] disabling docker service ...
	I0914 22:46:22.737551   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:22.749267   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:22.759866   45954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:22.868561   45954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:22.973780   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:22.986336   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:23.004987   45954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:23.005042   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.013821   45954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:23.013889   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.022487   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.030875   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.038964   45954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:23.047246   45954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:23.054339   45954 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:23.054379   45954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:23.066649   45954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:23.077024   45954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:23.174635   45954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:23.337031   45954 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:23.337113   45954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:23.342241   45954 start.go:537] Will wait 60s for crictl version
	I0914 22:46:23.342308   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:46:23.345832   45954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:23.377347   45954 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:23.377433   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.425559   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.492770   45954 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:22.203936   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Start
	I0914 22:46:22.204098   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring networks are active...
	I0914 22:46:22.204740   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network default is active
	I0914 22:46:22.205158   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network mk-embed-certs-588699 is active
	I0914 22:46:22.205524   46412 main.go:141] libmachine: (embed-certs-588699) Getting domain xml...
	I0914 22:46:22.206216   46412 main.go:141] libmachine: (embed-certs-588699) Creating domain...
	I0914 22:46:23.529479   46412 main.go:141] libmachine: (embed-certs-588699) Waiting to get IP...
	I0914 22:46:23.530274   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.530639   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.530694   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.530608   46986 retry.go:31] will retry after 299.617651ms: waiting for machine to come up
	I0914 22:46:23.494065   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:23.496974   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497458   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:23.497490   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497694   45954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:23.501920   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:23.517500   45954 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:23.517542   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:23.554344   45954 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:23.554403   45954 ssh_runner.go:195] Run: which lz4
	I0914 22:46:23.558745   45954 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 22:46:23.563443   45954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:23.563488   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:25.365372   45954 crio.go:444] Took 1.806660 seconds to copy over tarball
	I0914 22:46:25.365442   45954 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:23.832332   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.833457   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.833488   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.832911   46986 retry.go:31] will retry after 315.838121ms: waiting for machine to come up
	I0914 22:46:24.150532   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.150980   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.151009   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.150942   46986 retry.go:31] will retry after 369.928332ms: waiting for machine to come up
	I0914 22:46:24.522720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.523232   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.523257   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.523145   46986 retry.go:31] will retry after 533.396933ms: waiting for machine to come up
	I0914 22:46:25.057818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.058371   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.058405   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.058318   46986 retry.go:31] will retry after 747.798377ms: waiting for machine to come up
	I0914 22:46:25.807422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.807912   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.807956   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.807874   46986 retry.go:31] will retry after 947.037376ms: waiting for machine to come up
	I0914 22:46:26.756214   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:26.756720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:26.756757   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:26.756689   46986 retry.go:31] will retry after 1.117164865s: waiting for machine to come up
	I0914 22:46:27.875432   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:27.875931   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:27.875953   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:27.875886   46986 retry.go:31] will retry after 1.117181084s: waiting for machine to come up
	I0914 22:46:28.197684   45954 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.832216899s)
	I0914 22:46:28.197710   45954 crio.go:451] Took 2.832313 seconds to extract the tarball
	I0914 22:46:28.197718   45954 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:28.236545   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:28.286349   45954 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:28.286374   45954 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:28.286449   45954 ssh_runner.go:195] Run: crio config
	I0914 22:46:28.344205   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:28.344231   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:28.344253   45954 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:28.344289   45954 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.175 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799144 NodeName:default-k8s-diff-port-799144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:28.344454   45954 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.175
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799144"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:28.344536   45954 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-799144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0914 22:46:28.344591   45954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:28.354383   45954 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:28.354459   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:28.363277   45954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0914 22:46:28.378875   45954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:28.393535   45954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0914 22:46:28.408319   45954 ssh_runner.go:195] Run: grep 192.168.50.175	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:28.411497   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:28.421507   45954 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144 for IP: 192.168.50.175
	I0914 22:46:28.421536   45954 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:28.421702   45954 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:28.421742   45954 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:28.421805   45954 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.key
	I0914 22:46:28.421858   45954 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key.0216c1e7
	I0914 22:46:28.421894   45954 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key
	I0914 22:46:28.421994   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:28.422020   45954 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:28.422027   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:28.422048   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:28.422074   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:28.422095   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:28.422139   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:28.422695   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:28.443528   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:46:28.463679   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:28.483317   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:28.503486   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:28.523709   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:28.544539   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:28.565904   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:28.587316   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:28.611719   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:28.632158   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:28.652227   45954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:28.667709   45954 ssh_runner.go:195] Run: openssl version
	I0914 22:46:28.673084   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:28.682478   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686693   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686747   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.691836   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:28.701203   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:28.710996   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715353   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715408   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.720765   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:28.730750   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:28.740782   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745186   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745250   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.750589   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:28.760675   45954 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:28.764920   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:28.770573   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:28.776098   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:28.783455   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:28.790699   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:28.797514   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:28.804265   45954 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-799144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:28.804376   45954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:28.804427   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:28.833994   45954 cri.go:89] found id: ""
	I0914 22:46:28.834051   45954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:28.843702   45954 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:28.843724   45954 kubeadm.go:636] restartCluster start
	I0914 22:46:28.843769   45954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:28.852802   45954 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.854420   45954 kubeconfig.go:92] found "default-k8s-diff-port-799144" server: "https://192.168.50.175:8444"
	I0914 22:46:28.858058   45954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:28.866914   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.866968   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.877946   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.877969   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.878014   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.888579   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.389311   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.389420   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.401725   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.889346   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.889451   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.902432   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.388985   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.389062   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.401302   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.888853   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.888949   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.901032   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.389622   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.389733   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.405102   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.888685   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.888803   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.904300   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:32.388876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.388944   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.402419   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.995080   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:28.999205   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:28.999224   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:28.995414   46986 retry.go:31] will retry after 1.657878081s: waiting for machine to come up
	I0914 22:46:30.655422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:30.656029   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:30.656059   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:30.655960   46986 retry.go:31] will retry after 2.320968598s: waiting for machine to come up
	I0914 22:46:32.978950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:32.979423   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:32.979452   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:32.979369   46986 retry.go:31] will retry after 2.704173643s: waiting for machine to come up
	I0914 22:46:32.889585   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.889658   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.902514   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.388806   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.388906   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.405028   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.889633   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.889728   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.906250   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.388736   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.388810   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.403376   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.888851   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.888934   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.905873   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.389446   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.389516   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.404872   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.889475   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.889569   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.902431   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.388954   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.389054   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.401778   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.889442   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.889529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.902367   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:37.388925   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.389009   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.401860   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.685608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:35.686027   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:35.686064   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:35.685964   46986 retry.go:31] will retry after 2.240780497s: waiting for machine to come up
	I0914 22:46:37.928020   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:37.928402   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:37.928442   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:37.928354   46986 retry.go:31] will retry after 2.734049647s: waiting for machine to come up
	I0914 22:46:41.860186   46713 start.go:369] acquired machines lock for "old-k8s-version-930717" in 1m21.238611742s
	I0914 22:46:41.860234   46713 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:41.860251   46713 fix.go:54] fixHost starting: 
	I0914 22:46:41.860683   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:41.860738   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:41.877474   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0914 22:46:41.877964   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:41.878542   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:46:41.878568   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:41.878874   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:41.879057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:46:41.879276   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:46:41.880990   46713 fix.go:102] recreateIfNeeded on old-k8s-version-930717: state=Stopped err=<nil>
	I0914 22:46:41.881019   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	W0914 22:46:41.881175   46713 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:41.883128   46713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-930717" ...
	I0914 22:46:37.888876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.888950   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.901522   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.389056   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:38.389140   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:38.400632   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.867426   45954 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:38.867461   45954 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:38.867487   45954 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:38.867557   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:38.898268   45954 cri.go:89] found id: ""
	I0914 22:46:38.898328   45954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:38.914871   45954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:38.924737   45954 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:38.924785   45954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934436   45954 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934455   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.042672   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.982954   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.158791   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.235541   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.312855   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:46:40.312926   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.328687   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.842859   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.343019   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.842336   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.342351   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.665315   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.665775   46412 main.go:141] libmachine: (embed-certs-588699) Found IP for machine: 192.168.61.205
	I0914 22:46:40.665795   46412 main.go:141] libmachine: (embed-certs-588699) Reserving static IP address...
	I0914 22:46:40.665807   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has current primary IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.666273   46412 main.go:141] libmachine: (embed-certs-588699) Reserved static IP address: 192.168.61.205
	I0914 22:46:40.666316   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.666334   46412 main.go:141] libmachine: (embed-certs-588699) Waiting for SSH to be available...
	I0914 22:46:40.666375   46412 main.go:141] libmachine: (embed-certs-588699) DBG | skip adding static IP to network mk-embed-certs-588699 - found existing host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"}
	I0914 22:46:40.666401   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Getting to WaitForSSH function...
	I0914 22:46:40.668206   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668515   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.668542   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668654   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH client type: external
	I0914 22:46:40.668689   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa (-rw-------)
	I0914 22:46:40.668716   46412 main.go:141] libmachine: (embed-certs-588699) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:40.668728   46412 main.go:141] libmachine: (embed-certs-588699) DBG | About to run SSH command:
	I0914 22:46:40.668736   46412 main.go:141] libmachine: (embed-certs-588699) DBG | exit 0
	I0914 22:46:40.751202   46412 main.go:141] libmachine: (embed-certs-588699) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:40.751584   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetConfigRaw
	I0914 22:46:40.752291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:40.754685   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755054   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.755087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755318   46412 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/config.json ...
	I0914 22:46:40.755578   46412 machine.go:88] provisioning docker machine ...
	I0914 22:46:40.755603   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:40.755799   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.755940   46412 buildroot.go:166] provisioning hostname "embed-certs-588699"
	I0914 22:46:40.755959   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.756109   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.758111   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758435   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.758481   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758547   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.758686   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758798   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758983   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.759108   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.759567   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.759586   46412 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-588699 && echo "embed-certs-588699" | sudo tee /etc/hostname
	I0914 22:46:40.882559   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-588699
	
	I0914 22:46:40.882615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.885741   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.886137   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886403   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.886635   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886810   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886964   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.887176   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.887633   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.887662   46412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-588699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-588699/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-588699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:41.007991   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:41.008024   46412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:41.008075   46412 buildroot.go:174] setting up certificates
	I0914 22:46:41.008103   46412 provision.go:83] configureAuth start
	I0914 22:46:41.008118   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:41.008615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.011893   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012262   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.012295   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012467   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.014904   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015343   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.015378   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015551   46412 provision.go:138] copyHostCerts
	I0914 22:46:41.015605   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:41.015618   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:41.015691   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:41.015847   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:41.015864   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:41.015897   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:41.015979   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:41.015989   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:41.016019   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:41.016080   46412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.embed-certs-588699 san=[192.168.61.205 192.168.61.205 localhost 127.0.0.1 minikube embed-certs-588699]
	I0914 22:46:41.134486   46412 provision.go:172] copyRemoteCerts
	I0914 22:46:41.134537   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:41.134559   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.137472   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137789   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.137818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137995   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.138216   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.138365   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.138536   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.224196   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:41.244551   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:46:41.267745   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:41.292472   46412 provision.go:86] duration metric: configureAuth took 284.355734ms
	I0914 22:46:41.292497   46412 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:41.292668   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:41.292748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.295661   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296010   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.296042   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296246   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.296469   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296652   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296836   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.297031   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.297522   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.297556   46412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:41.609375   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:41.609417   46412 machine.go:91] provisioned docker machine in 853.82264ms
	I0914 22:46:41.609431   46412 start.go:300] post-start starting for "embed-certs-588699" (driver="kvm2")
	I0914 22:46:41.609444   46412 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:41.609472   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.609831   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:41.609890   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.613037   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613497   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.613525   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613662   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.613854   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.614023   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.614142   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.704618   46412 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:41.709759   46412 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:41.709787   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:41.709867   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:41.709991   46412 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:41.710127   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:41.721261   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:41.742359   46412 start.go:303] post-start completed in 132.913862ms
	I0914 22:46:41.742387   46412 fix.go:56] fixHost completed within 19.562130605s
	I0914 22:46:41.742418   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.745650   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.746172   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746369   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.746564   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746781   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746944   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.747138   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.747629   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.747648   46412 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:41.860006   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731601.811427748
	
	I0914 22:46:41.860030   46412 fix.go:206] guest clock: 1694731601.811427748
	I0914 22:46:41.860040   46412 fix.go:219] Guest: 2023-09-14 22:46:41.811427748 +0000 UTC Remote: 2023-09-14 22:46:41.742391633 +0000 UTC m=+142.955285980 (delta=69.036115ms)
	I0914 22:46:41.860091   46412 fix.go:190] guest clock delta is within tolerance: 69.036115ms
	I0914 22:46:41.860098   46412 start.go:83] releasing machines lock for "embed-certs-588699", held for 19.679882828s
	I0914 22:46:41.860131   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.860411   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.863136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863584   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.863618   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863721   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864206   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864398   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864477   46412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:41.864514   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.864639   46412 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:41.864666   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.867568   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.867976   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.868028   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868147   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868248   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868373   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868579   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.868691   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868833   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.868876   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.869026   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.980624   46412 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:41.986113   46412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:42.134956   46412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:42.141030   46412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:42.141101   46412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:42.158635   46412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:42.158660   46412 start.go:469] detecting cgroup driver to use...
	I0914 22:46:42.158722   46412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:42.173698   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:42.184948   46412 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:42.185007   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:42.196434   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:42.208320   46412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:42.326624   46412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:42.459498   46412 docker.go:212] disabling docker service ...
	I0914 22:46:42.459567   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:42.472479   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:42.486651   46412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:42.636161   46412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:42.739841   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:42.758562   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:42.779404   46412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:42.779472   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.787902   46412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:42.787954   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.799513   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.811428   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.823348   46412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:42.835569   46412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:42.842820   46412 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:42.842885   46412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:42.855225   46412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:42.863005   46412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:42.979756   46412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:43.181316   46412 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:43.181384   46412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:43.191275   46412 start.go:537] Will wait 60s for crictl version
	I0914 22:46:43.191343   46412 ssh_runner.go:195] Run: which crictl
	I0914 22:46:43.196264   46412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:43.228498   46412 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:43.228589   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.281222   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.341816   46412 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:43.343277   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:43.346473   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.346835   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:43.346882   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.347084   46412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:43.351205   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:43.364085   46412 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:43.364156   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:43.400558   46412 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:43.400634   46412 ssh_runner.go:195] Run: which lz4
	I0914 22:46:43.404906   46412 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:43.409239   46412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:43.409277   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:41.885236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Start
	I0914 22:46:41.885399   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring networks are active...
	I0914 22:46:41.886125   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network default is active
	I0914 22:46:41.886511   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network mk-old-k8s-version-930717 is active
	I0914 22:46:41.886855   46713 main.go:141] libmachine: (old-k8s-version-930717) Getting domain xml...
	I0914 22:46:41.887524   46713 main.go:141] libmachine: (old-k8s-version-930717) Creating domain...
	I0914 22:46:43.317748   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting to get IP...
	I0914 22:46:43.318757   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.319197   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.319288   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.319176   47160 retry.go:31] will retry after 287.487011ms: waiting for machine to come up
	I0914 22:46:43.608890   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.609712   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.609738   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.609656   47160 retry.go:31] will retry after 289.187771ms: waiting for machine to come up
	I0914 22:46:43.900234   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.900655   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.900679   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.900576   47160 retry.go:31] will retry after 433.007483ms: waiting for machine to come up
	I0914 22:46:44.335318   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.335775   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.335804   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.335727   47160 retry.go:31] will retry after 383.295397ms: waiting for machine to come up
	I0914 22:46:44.720415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.720967   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.721001   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.720856   47160 retry.go:31] will retry after 698.454643ms: waiting for machine to come up
	I0914 22:46:45.420833   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:45.421349   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:45.421391   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:45.421297   47160 retry.go:31] will retry after 938.590433ms: waiting for machine to come up
	I0914 22:46:42.842954   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.867206   45954 api_server.go:72] duration metric: took 2.554352134s to wait for apiserver process to appear ...
	I0914 22:46:42.867238   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:46:42.867257   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.755748   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:46:46.755780   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:46:46.755832   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.873209   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:46.873243   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.373637   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.391311   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.391349   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.873646   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.880286   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.880323   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:48.373423   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:48.389682   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:46:48.415694   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:46:48.415727   45954 api_server.go:131] duration metric: took 5.548481711s to wait for apiserver health ...
	I0914 22:46:48.415739   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.415748   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.417375   45954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:46:45.238555   46412 crio.go:444] Took 1.833681 seconds to copy over tarball
	I0914 22:46:45.238634   46412 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:48.251155   46412 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012492519s)
	I0914 22:46:48.251176   46412 crio.go:451] Took 3.012596 seconds to extract the tarball
	I0914 22:46:48.251184   46412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:48.290336   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:48.338277   46412 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:48.338302   46412 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:48.338378   46412 ssh_runner.go:195] Run: crio config
	I0914 22:46:48.402542   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.402564   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.402583   46412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:48.402604   46412 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.205 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-588699 NodeName:embed-certs-588699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:48.402791   46412 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-588699"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:48.402883   46412 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-588699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
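	The block above is the complete multi-document kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and kubelet drop-in that minikube renders for embed-certs-588699; a few lines further down it is written to /var/tmp/minikube/kubeadm.yaml.new on the guest. As an illustration only (not minikube code), a short Go sketch that decodes that multi-document YAML and prints the ClusterConfiguration fields used here could look like the following; the file path and the struct fields are assumptions covering just the keys visible above.

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	// doc covers only the keys of interest; other fields in each YAML document are ignored.
	type doc struct {
		Kind                 string `yaml:"kind"`
		KubernetesVersion    string `yaml:"kubernetesVersion"`
		ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
		Networking           struct {
			PodSubnet     string `yaml:"podSubnet"`
			ServiceSubnet string `yaml:"serviceSubnet"`
		} `yaml:"networking"`
	}

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // assumed path, taken from the log
		if err != nil {
			log.Fatal(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(raw))
		for {
			var d doc
			if err := dec.Decode(&d); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more YAML documents
				}
				log.Fatal(err)
			}
			if d.Kind == "ClusterConfiguration" {
				fmt.Printf("version=%s endpoint=%s podSubnet=%s serviceSubnet=%s\n",
					d.KubernetesVersion, d.ControlPlaneEndpoint,
					d.Networking.PodSubnet, d.Networking.ServiceSubnet)
			}
		}
	}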
	I0914 22:46:48.402958   46412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:48.414406   46412 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:48.414484   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:48.426437   46412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 22:46:48.445351   46412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:48.463696   46412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0914 22:46:48.481887   46412 ssh_runner.go:195] Run: grep 192.168.61.205	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:48.485825   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:48.500182   46412 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699 for IP: 192.168.61.205
	I0914 22:46:48.500215   46412 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:48.500362   46412 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:48.500417   46412 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:48.500514   46412 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/client.key
	I0914 22:46:48.500600   46412 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key.8dac69f7
	I0914 22:46:48.500726   46412 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key
	I0914 22:46:48.500885   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:48.500926   46412 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:48.500942   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:48.500976   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:48.501008   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:48.501039   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:48.501096   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:48.501918   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:48.528790   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:46:48.558557   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:48.583664   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:48.608274   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:48.631638   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:48.655163   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:48.677452   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:48.700443   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:48.724547   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:48.751559   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:48.778910   46412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:48.794369   46412 ssh_runner.go:195] Run: openssl version
	I0914 22:46:48.799778   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:48.809263   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814790   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814848   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.820454   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:48.829942   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:46.361228   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:46.361816   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:46.361846   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:46.361795   47160 retry.go:31] will retry after 1.00738994s: waiting for machine to come up
	I0914 22:46:47.370525   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:47.370964   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:47.370991   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:47.370921   47160 retry.go:31] will retry after 1.441474351s: waiting for machine to come up
	I0914 22:46:48.813921   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:48.814415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:48.814447   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:48.814362   47160 retry.go:31] will retry after 1.497562998s: waiting for machine to come up
	I0914 22:46:50.313674   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:50.314191   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:50.314221   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:50.314137   47160 retry.go:31] will retry after 1.620308161s: waiting for machine to come up
	I0914 22:46:48.418825   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:46:48.456715   45954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:46:48.496982   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:46:48.515172   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:46:48.515209   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:46:48.515223   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:46:48.515234   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:46:48.515247   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:46:48.515261   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:46:48.515272   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:46:48.515285   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:46:48.515295   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:46:48.515307   45954 system_pods.go:74] duration metric: took 18.305048ms to wait for pod list to return data ...
	I0914 22:46:48.515320   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:46:48.518842   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:46:48.518875   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:46:48.518888   45954 node_conditions.go:105] duration metric: took 3.562448ms to run NodePressure ...
	I0914 22:46:48.518908   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:50.951051   45954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.432118027s)
	I0914 22:46:50.951087   45954 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959708   45954 kubeadm.go:787] kubelet initialised
	I0914 22:46:50.959735   45954 kubeadm.go:788] duration metric: took 8.637125ms waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959745   45954 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:50.966214   45954 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.975076   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975106   45954 pod_ready.go:81] duration metric: took 8.863218ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.975118   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975129   45954 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.982438   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982471   45954 pod_ready.go:81] duration metric: took 7.330437ms waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.982485   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982493   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.991067   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991102   45954 pod_ready.go:81] duration metric: took 8.574268ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.991115   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991125   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.006696   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006732   45954 pod_ready.go:81] duration metric: took 15.595604ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.006745   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006755   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.354645   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354678   45954 pod_ready.go:81] duration metric: took 347.913938ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.354690   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354702   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.754959   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.754998   45954 pod_ready.go:81] duration metric: took 400.283619ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.755012   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.755022   45954 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:52.156253   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156299   45954 pod_ready.go:81] duration metric: took 401.260791ms waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:52.156314   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156327   45954 pod_ready.go:38] duration metric: took 1.196571114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:52.156352   45954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:46:52.169026   45954 ops.go:34] apiserver oom_adj: -16
	I0914 22:46:52.169049   45954 kubeadm.go:640] restartCluster took 23.325317121s
	I0914 22:46:52.169059   45954 kubeadm.go:406] StartCluster complete in 23.364799998s
	I0914 22:46:52.169079   45954 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.169161   45954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:46:52.171787   45954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.172077   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:46:52.172229   45954 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:46:52.172310   45954 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172332   45954 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-799144"
	I0914 22:46:52.172325   45954 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799144"
	W0914 22:46:52.172340   45954 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:46:52.172347   45954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799144"
	I0914 22:46:52.172351   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:52.172394   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.172394   45954 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172424   45954 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.172436   45954 addons.go:240] addon metrics-server should already be in state true
	I0914 22:46:52.172500   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.173205   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173252   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173383   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173451   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173744   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173822   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.178174   45954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-799144" context rescaled to 1 replicas
	I0914 22:46:52.178208   45954 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:46:52.180577   45954 out.go:177] * Verifying Kubernetes components...
	I0914 22:46:52.182015   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:46:52.194030   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0914 22:46:52.194040   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I0914 22:46:52.194506   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.194767   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.195059   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195078   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195219   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195235   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195420   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.195642   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.195715   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.196346   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.196392   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.198560   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0914 22:46:52.199130   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.199612   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.199641   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.199995   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.200530   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.200575   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.206536   45954 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.206558   45954 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:46:52.206584   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.206941   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.206973   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.215857   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0914 22:46:52.216266   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.216801   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.216825   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.217297   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.217484   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.220211   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I0914 22:46:52.220740   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.221296   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.221314   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.221798   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.221986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.222185   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.224162   45954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:46:52.224261   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.225483   45954 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.225494   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:46:52.225511   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.225526   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0914 22:46:52.227067   45954 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:46:52.225976   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.228337   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:46:52.228354   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:46:52.228373   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.228750   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.228764   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.228959   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229601   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.229674   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.229702   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229908   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.230068   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.230171   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.230203   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.230280   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.230503   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.232673   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233097   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.233153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.233536   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.233684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.233821   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.251500   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43473
	I0914 22:46:52.252069   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.252702   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.252722   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.253171   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.253419   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.255233   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.255574   45954 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.255591   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:46:52.255609   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.258620   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.259178   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259379   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.259584   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.259754   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.259961   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.350515   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.367291   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:46:52.367309   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:46:52.413141   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:46:52.413170   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:46:52.419647   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.462672   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:52.462698   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:46:52.519331   45954 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:46:52.519330   45954 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:52.530851   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:53.719523   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368967292s)
	I0914 22:46:53.719575   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719582   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299890259s)
	I0914 22:46:53.719616   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719638   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.719589   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720079   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720083   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720097   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720101   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720107   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720111   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720121   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720080   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720404   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720414   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720425   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720501   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720525   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720538   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720553   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720804   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720822   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.721724   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.190817165s)
	I0914 22:46:53.721771   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.721784   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.722084   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.722100   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.722089   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.722115   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.722128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.723592   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.723602   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.723614   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.723631   45954 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-799144"
	I0914 22:46:53.725666   45954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:46:48.840421   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.179960   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.180026   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.185490   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:49.194744   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:49.205937   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210532   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210582   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.215917   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:49.225393   46412 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:49.229604   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:49.234795   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:49.239907   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:49.245153   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:49.250558   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:49.256142   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:49.261518   46412 kubeadm.go:404] StartCluster: {Name:embed-certs-588699 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:49.261618   46412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:49.261687   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:49.291460   46412 cri.go:89] found id: ""
	I0914 22:46:49.291560   46412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:49.300496   46412 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:49.300558   46412 kubeadm.go:636] restartCluster start
	I0914 22:46:49.300616   46412 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:49.309827   46412 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.311012   46412 kubeconfig.go:92] found "embed-certs-588699" server: "https://192.168.61.205:8443"
	I0914 22:46:49.313336   46412 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:49.321470   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.321528   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.332257   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.332275   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.332320   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.345427   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.846146   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.846240   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.859038   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.345492   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.345583   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.358070   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.845544   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.845605   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.861143   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.345602   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.345675   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.357406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.845964   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.846082   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.860079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.346093   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.346159   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.360952   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.845612   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.845717   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.860504   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:53.345991   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.360947   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.936297   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:51.936809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:51.936840   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:51.936747   47160 retry.go:31] will retry after 2.284330296s: waiting for machine to come up
	I0914 22:46:54.222960   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:54.223478   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:54.223530   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:54.223417   47160 retry.go:31] will retry after 3.537695113s: waiting for machine to come up
	I0914 22:46:53.726984   45954 addons.go:502] enable addons completed in 1.554762762s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:46:54.641725   45954 node_ready.go:58] node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:57.141217   45954 node_ready.go:49] node "default-k8s-diff-port-799144" has status "Ready":"True"
	I0914 22:46:57.141240   45954 node_ready.go:38] duration metric: took 4.621872993s waiting for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:57.141250   45954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:57.151019   45954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162159   45954 pod_ready.go:92] pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace has status "Ready":"True"
	I0914 22:46:57.162180   45954 pod_ready.go:81] duration metric: took 11.133949ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162189   45954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:53.845734   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.845815   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.858406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.346078   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.346138   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.360079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.845738   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.845801   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.861945   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.346533   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.346627   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.360445   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.845577   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.845681   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.856800   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.346374   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.346461   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.357724   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.846264   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.846376   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.857963   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.346006   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.357336   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.845877   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.845944   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.857310   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:58.345855   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.345925   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.357766   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.762315   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:57.762689   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:57.762714   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:57.762651   47160 retry.go:31] will retry after 3.773493672s: waiting for machine to come up
	I0914 22:46:59.185077   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:01.185320   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:02.912525   45407 start.go:369] acquired machines lock for "no-preload-344363" in 55.358672707s
	I0914 22:47:02.912580   45407 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:47:02.912592   45407 fix.go:54] fixHost starting: 
	I0914 22:47:02.913002   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:47:02.913035   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:47:02.932998   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0914 22:47:02.933535   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:47:02.933956   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:47:02.933977   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:47:02.934303   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:47:02.934484   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:02.934627   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:47:02.936412   45407 fix.go:102] recreateIfNeeded on no-preload-344363: state=Stopped err=<nil>
	I0914 22:47:02.936438   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	W0914 22:47:02.936601   45407 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:47:02.938235   45407 out.go:177] * Restarting existing kvm2 VM for "no-preload-344363" ...
	I0914 22:46:58.845728   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.845806   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.859436   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:59.322167   46412 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:59.322206   46412 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:59.322218   46412 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:59.322278   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:59.352268   46412 cri.go:89] found id: ""
	I0914 22:46:59.352371   46412 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:59.366742   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:59.374537   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:59.374598   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382227   46412 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382251   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:59.486171   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.268311   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.462362   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.528925   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
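None of the kubeconfig files under /etc/kubernetes exist on the restarted VM, so the cluster is rebuilt by re-running the individual kubeadm init phases against the generated config. Collected from the Run: lines above (paths and version exactly as logged), the sequence amounts to:

    # sketch of the reconfigure sequence above; B and CFG are just shorthands for this sketch
    B=/var/lib/minikube/binaries/v1.28.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo cp "${CFG}.new" "$CFG"
    # regenerate certs, kubeconfigs, kubelet config, static-pod manifests and local etcd
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$B:$PATH" kubeadm init phase $phase --config "$CFG"
    done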
	I0914 22:47:00.601616   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:00.601697   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:00.623311   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.140972   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.640574   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.141044   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.640374   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.140881   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.166662   46412 api_server.go:72] duration metric: took 2.565044214s to wait for apiserver process to appear ...
	I0914 22:47:03.166688   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:03.166703   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:01.540578   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541058   46713 main.go:141] libmachine: (old-k8s-version-930717) Found IP for machine: 192.168.72.70
	I0914 22:47:01.541095   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has current primary IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541106   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserving static IP address...
	I0914 22:47:01.541552   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserved static IP address: 192.168.72.70
	I0914 22:47:01.541579   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting for SSH to be available...
	I0914 22:47:01.541613   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.541646   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | skip adding static IP to network mk-old-k8s-version-930717 - found existing host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"}
	I0914 22:47:01.541672   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Getting to WaitForSSH function...
	I0914 22:47:01.543898   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544285   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.544317   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544428   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH client type: external
	I0914 22:47:01.544451   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa (-rw-------)
	I0914 22:47:01.544499   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:01.544518   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | About to run SSH command:
	I0914 22:47:01.544552   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | exit 0
	I0914 22:47:01.639336   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:01.639694   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetConfigRaw
	I0914 22:47:01.640324   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.642979   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643345   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.643389   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643643   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:47:01.643833   46713 machine.go:88] provisioning docker machine ...
	I0914 22:47:01.643855   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:01.644085   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644249   46713 buildroot.go:166] provisioning hostname "old-k8s-version-930717"
	I0914 22:47:01.644272   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644434   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.646429   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.646771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.646819   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.647008   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.647209   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647360   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647536   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.647737   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.648245   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.648270   46713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-930717 && echo "old-k8s-version-930717" | sudo tee /etc/hostname
	I0914 22:47:01.789438   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-930717
	
	I0914 22:47:01.789472   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.792828   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793229   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.793277   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793459   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.793644   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793778   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793953   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.794120   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.794459   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.794478   46713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-930717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-930717/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-930717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:01.928496   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:01.928536   46713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:01.928567   46713 buildroot.go:174] setting up certificates
	I0914 22:47:01.928586   46713 provision.go:83] configureAuth start
	I0914 22:47:01.928609   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.928914   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.931976   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932368   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.932398   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932542   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.934939   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935311   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.935344   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935480   46713 provision.go:138] copyHostCerts
	I0914 22:47:01.935537   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:01.935548   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:01.935620   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:01.935775   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:01.935789   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:01.935824   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:01.935970   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:01.935981   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:01.936010   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:01.936086   46713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-930717 san=[192.168.72.70 192.168.72.70 localhost 127.0.0.1 minikube old-k8s-version-930717]
	I0914 22:47:02.167446   46713 provision.go:172] copyRemoteCerts
	I0914 22:47:02.167510   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:02.167534   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.170442   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.170862   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.170900   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.171089   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.171302   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.171496   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.171645   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.267051   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:02.289098   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 22:47:02.312189   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:02.334319   46713 provision.go:86] duration metric: configureAuth took 405.716896ms
	I0914 22:47:02.334346   46713 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:02.334555   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:47:02.334638   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.337255   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337605   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.337637   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.337949   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338100   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338240   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.338384   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.338859   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.338890   46713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:02.654307   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:02.654332   46713 machine.go:91] provisioned docker machine in 1.010485195s
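The "%!s(MISSING)" in the logged command is a printf format argument the logger had no value for, not something that ran on the node; judging by the output echoed back above, the step writes cri-o's insecure-registry options and restarts the runtime, approximately:

    # approximate reconstruction of the provisioning command shown above
    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio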
	I0914 22:47:02.654345   46713 start.go:300] post-start starting for "old-k8s-version-930717" (driver="kvm2")
	I0914 22:47:02.654358   46713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:02.654382   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.654747   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:02.654782   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.657773   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658153   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.658182   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658425   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.658630   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.658812   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.659001   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.750387   46713 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:02.754444   46713 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:02.754468   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:02.754545   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:02.754654   46713 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:02.754762   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:02.765781   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:02.788047   46713 start.go:303] post-start completed in 133.686385ms
	I0914 22:47:02.788072   46713 fix.go:56] fixHost completed within 20.927830884s
	I0914 22:47:02.788098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.791051   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791408   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.791441   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791628   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.791840   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792041   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792215   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.792383   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.792817   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.792836   46713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:02.912359   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731622.856601606
	
	I0914 22:47:02.912381   46713 fix.go:206] guest clock: 1694731622.856601606
	I0914 22:47:02.912391   46713 fix.go:219] Guest: 2023-09-14 22:47:02.856601606 +0000 UTC Remote: 2023-09-14 22:47:02.788077838 +0000 UTC m=+102.306332554 (delta=68.523768ms)
	I0914 22:47:02.912413   46713 fix.go:190] guest clock delta is within tolerance: 68.523768ms
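The clock check behind the garbled "date +%!s(MISSING).%!N(MISSING)" line is presumably plain epoch time with nanoseconds, compared against the host to confirm the guest clock is close enough (a 68 ms delta here):

    # guest-side probe; output such as 1694731622.856601606 is seconds.nanoseconds since the epoch
    date +%s.%N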
	I0914 22:47:02.912424   46713 start.go:83] releasing machines lock for "old-k8s-version-930717", held for 21.052207532s
	I0914 22:47:02.912457   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.912730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:02.915769   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916200   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.916265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916453   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917073   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917245   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917352   46713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:02.917397   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.917535   46713 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:02.917563   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.920256   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920363   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920656   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920695   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920724   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920744   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920959   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921261   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921282   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921431   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921489   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921567   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.921635   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:03.014070   46713 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:03.047877   46713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:03.192347   46713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:03.200249   46713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:03.200324   46713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:03.215110   46713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:03.215138   46713 start.go:469] detecting cgroup driver to use...
	I0914 22:47:03.215201   46713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:03.228736   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:03.241326   46713 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:03.241377   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:03.253001   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:03.264573   46713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:03.371107   46713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:03.512481   46713 docker.go:212] disabling docker service ...
	I0914 22:47:03.512554   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:03.526054   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:03.537583   46713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:03.662087   46713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:03.793448   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:03.807574   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:03.828240   46713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0914 22:47:03.828311   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.842435   46713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:03.842490   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.856199   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.867448   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.878222   46713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:03.891806   46713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:03.899686   46713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:03.899740   46713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:03.912584   46713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:03.920771   46713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:04.040861   46713 ssh_runner.go:195] Run: sudo systemctl restart crio
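The CRI-O tuning steps above, collected as the plain commands that were run (taken from the Run: lines; the printf/find format verbs the logger dropped are left out):

    # point crictl at the cri-o socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pause image and cgroup driver expected by this run
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # networking prerequisites, then restart the runtime
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio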
	I0914 22:47:04.230077   46713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:04.230147   46713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:04.235664   46713 start.go:537] Will wait 60s for crictl version
	I0914 22:47:04.235726   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:04.239737   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:04.279680   46713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:04.279755   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.329363   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.389025   46713 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0914 22:47:02.939505   45407 main.go:141] libmachine: (no-preload-344363) Calling .Start
	I0914 22:47:02.939701   45407 main.go:141] libmachine: (no-preload-344363) Ensuring networks are active...
	I0914 22:47:02.940415   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network default is active
	I0914 22:47:02.940832   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network mk-no-preload-344363 is active
	I0914 22:47:02.941287   45407 main.go:141] libmachine: (no-preload-344363) Getting domain xml...
	I0914 22:47:02.942103   45407 main.go:141] libmachine: (no-preload-344363) Creating domain...
	I0914 22:47:04.410207   45407 main.go:141] libmachine: (no-preload-344363) Waiting to get IP...
	I0914 22:47:04.411192   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.411669   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.411744   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.411647   47373 retry.go:31] will retry after 198.435142ms: waiting for machine to come up
	I0914 22:47:04.612435   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.612957   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.613025   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.612934   47373 retry.go:31] will retry after 350.950211ms: waiting for machine to come up
	I0914 22:47:04.965570   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.966332   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.966458   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.966377   47373 retry.go:31] will retry after 398.454996ms: waiting for machine to come up
	I0914 22:47:04.390295   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:04.393815   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394249   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:04.394282   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394543   46713 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:04.398850   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:04.411297   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:47:04.411363   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:04.443950   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:04.444023   46713 ssh_runner.go:195] Run: which lz4
	I0914 22:47:04.448422   46713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:47:04.453479   46713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:47:04.453505   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
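No v1.16.0 images are present in cri-o, so the preload tarball is shipped instead: the stat probe (presumably "%s %y", size and modification time, with the verbs again lost by the logger) finds no /preloaded.tar.lz4 on the node, and the ~441 MB preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 is copied over SSH. The guard that triggers the copy is roughly:

    # a non-zero exit here means the tarball still has to be pushed to the node
    which lz4
    stat -c '%s %y' /preloaded.tar.lz4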
	I0914 22:47:03.686086   45954 pod_ready.go:92] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.686112   45954 pod_ready.go:81] duration metric: took 6.523915685s waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.686125   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692434   45954 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.692454   45954 pod_ready.go:81] duration metric: took 6.320818ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692466   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698065   45954 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.698088   45954 pod_ready.go:81] duration metric: took 5.613243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698100   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703688   45954 pod_ready.go:92] pod "kube-proxy-j2qmv" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.703706   45954 pod_ready.go:81] duration metric: took 5.599421ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703718   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708487   45954 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.708505   45954 pod_ready.go:81] duration metric: took 4.779322ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708516   45954 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:05.993620   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
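The pod_ready waits above poll the Ready condition of each system-critical pod through the API. A rough kubectl equivalent, assuming the kubectl context carries the profile name as it does elsewhere in this report:

    kubectl --context default-k8s-diff-port-799144 -n kube-system \
        wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m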
	I0914 22:47:07.475579   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.475617   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:07.475631   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:07.531335   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.531366   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:08.032057   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.039350   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.039384   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:08.531559   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.538857   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.538891   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:09.031899   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:09.037891   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:47:09.047398   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:47:09.047426   46412 api_server.go:131] duration metric: took 5.880732639s to wait for apiserver health ...
	I0914 22:47:09.047434   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:47:09.047440   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:09.049137   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
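	The api_server.go loop above keeps re-querying https://192.168.61.205:8443/healthz until the 500 responses listing failed poststarthooks turn into a 200 "ok". A minimal Go sketch of that polling pattern follows; it is not minikube's actual api_server.go code, and the URL, timeout, and skipped TLS verification are assumptions made only for illustration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout elapses, printing the failing-hook body on 500s as the log does.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver presents a cert signed by minikube's own CA, so this
				// probe skips verification (an assumption made for the sketch).
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.205:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}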
	I0914 22:47:05.366070   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.366812   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.366844   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.366740   47373 retry.go:31] will retry after 471.857141ms: waiting for machine to come up
	I0914 22:47:05.840519   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.841198   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.841229   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.841150   47373 retry.go:31] will retry after 632.189193ms: waiting for machine to come up
	I0914 22:47:06.475175   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:06.475769   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:06.475800   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:06.475704   47373 retry.go:31] will retry after 866.407813ms: waiting for machine to come up
	I0914 22:47:07.344343   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:07.344865   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:07.344897   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:07.344815   47373 retry.go:31] will retry after 1.101301607s: waiting for machine to come up
	I0914 22:47:08.448452   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:08.449070   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:08.449111   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:08.449014   47373 retry.go:31] will retry after 995.314765ms: waiting for machine to come up
	I0914 22:47:09.446294   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:09.446708   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:09.446740   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:09.446653   47373 retry.go:31] will retry after 1.180552008s: waiting for machine to come up
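	The libmachine lines above show one operation being retried with a growing delay ("will retry after 471ms ... 995ms ...") while waiting for the VM to report an IP address. Below is a minimal sketch of that retry-with-backoff pattern, assuming a simple doubling delay and a hypothetical op; it is not minikube's retry package.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff re-runs op with a doubling delay until it succeeds or the
	// delay would exceed max, mirroring the "will retry after ..." lines above.
	func retryWithBackoff(op func() error, start, max time.Duration) error {
		wait := start
		for {
			err := op()
			if err == nil {
				return nil
			}
			if wait > max {
				return fmt.Errorf("giving up: %w", err)
			}
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 300*time.Millisecond, 5*time.Second)
		fmt.Println(err)
	}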
	I0914 22:47:05.984485   46713 crio.go:444] Took 1.536109 seconds to copy over tarball
	I0914 22:47:05.984562   46713 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:47:09.247825   46713 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.263230608s)
	I0914 22:47:09.247858   46713 crio.go:451] Took 3.263345 seconds to extract the tarball
	I0914 22:47:09.247871   46713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:47:09.289821   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:09.340429   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:09.340463   46713 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:09.340544   46713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.340568   46713 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0914 22:47:09.340535   46713 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.340531   46713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.340789   46713 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.340811   46713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.340886   46713 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.340793   46713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.342655   46713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.342658   46713 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.342636   46713 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.342635   46713 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.342793   46713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.561063   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0914 22:47:09.564079   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.564246   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.564957   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.566014   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.571757   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.578469   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.687502   46713 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0914 22:47:09.687548   46713 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0914 22:47:09.687591   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.727036   46713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0914 22:47:09.727085   46713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.727140   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0914 22:47:09.737952   46713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0914 22:47:09.737986   46713 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.737990   46713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0914 22:47:09.738002   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738013   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738023   46713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.738063   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.744728   46713 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0914 22:47:09.744768   46713 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.744813   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753014   46713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0914 22:47:09.753055   46713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.753080   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753104   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.753056   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0914 22:47:09.753149   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.753193   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.753213   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.758372   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.758544   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.875271   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0914 22:47:09.875299   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0914 22:47:09.875357   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0914 22:47:09.875382   46713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.875404   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0914 22:47:09.876393   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0914 22:47:09.878339   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0914 22:47:09.878491   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0914 22:47:09.881457   46713 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0914 22:47:09.881475   46713 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.881521   46713 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
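	The cache_images/crio lines above first ask the runtime what it already has (crictl images), then transfer missing tarballs and load them with podman load. A rough sketch of those two shell steps driven from Go via os/exec follows; the image name and tarball path are taken from the log purely as examples, and the presence check here is a plain name match rather than the JSON parsing the real code does.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePresent does a plain-text name match against "crictl images" output;
	// the real code parses --output json, so this is only an approximation.
	func imagePresent(image string) bool {
		out, err := exec.Command("sudo", "crictl", "images").CombinedOutput()
		repo := strings.SplitN(image, ":", 2)[0]
		return err == nil && strings.Contains(string(out), repo)
	}

	// loadCached feeds a previously transferred tarball to the runtime via podman.
	func loadCached(tarball string) error {
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}

	func main() {
		if !imagePresent("registry.k8s.io/pause:3.1") {
			if err := loadCached("/var/lib/minikube/images/pause_3.1"); err != nil {
				fmt.Println(err)
			}
		}
	}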
	I0914 22:47:08.496805   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.993044   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:09.050966   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:09.061912   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:09.096783   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:09.111938   46412 system_pods.go:59] 8 kube-system pods found
	I0914 22:47:09.111976   46412 system_pods.go:61] "coredns-5dd5756b68-zrd8r" [5b5f18a0-d6ee-42f2-b31a-4f8555b50388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:09.111988   46412 system_pods.go:61] "etcd-embed-certs-588699" [b32d61b5-8c3f-4980-9f0f-c08630be9c36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:47:09.112001   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [58ac976e-7a8c-4aee-9ee5-b92bd7e897b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:47:09.112015   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [3f9587f5-fe32-446a-a4c9-cb679b177937] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:47:09.112036   46412 system_pods.go:61] "kube-proxy-l8pq9" [4aecae33-dcd9-4ec6-a537-ecbb076c44d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:47:09.112052   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [f23ab185-f4c2-4e39-936d-51d51538b0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:47:09.112066   46412 system_pods.go:61] "metrics-server-57f55c9bc5-zvk82" [3c48277c-4604-4a83-82ea-2776cf0d0537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:47:09.112077   46412 system_pods.go:61] "storage-provisioner" [f0acbbe1-c326-4863-ae2e-d2d3e5be07c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:47:09.112090   46412 system_pods.go:74] duration metric: took 15.280254ms to wait for pod list to return data ...
	I0914 22:47:09.112103   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:09.119686   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:09.119725   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:09.119747   46412 node_conditions.go:105] duration metric: took 7.637688ms to run NodePressure ...
	I0914 22:47:09.119768   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:09.407351   46412 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414338   46412 kubeadm.go:787] kubelet initialised
	I0914 22:47:09.414361   46412 kubeadm.go:788] duration metric: took 6.974234ms waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414369   46412 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:47:09.424482   46412 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:12.171133   46412 pod_ready.go:102] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"False"
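	pod_ready.go above polls each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that check follows, assuming a kubeconfig path and reusing the coredns pod name from the log; it is not minikube's pod_ready helper.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path is illustrative; the pod name is the coredns pod from the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-zrd8r", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}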
	I0914 22:47:10.628919   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:10.629418   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:10.629449   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:10.629366   47373 retry.go:31] will retry after 1.486310454s: waiting for machine to come up
	I0914 22:47:12.117762   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:12.118350   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:12.118381   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:12.118295   47373 retry.go:31] will retry after 2.678402115s: waiting for machine to come up
	I0914 22:47:14.798599   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:14.799127   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:14.799160   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:14.799060   47373 retry.go:31] will retry after 2.724185493s: waiting for machine to come up
	I0914 22:47:10.647242   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:12.244764   46713 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.363213143s)
	I0914 22:47:12.244798   46713 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0914 22:47:12.244823   46713 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.013457524s)
	I0914 22:47:12.244888   46713 cache_images.go:92] LoadImages completed in 2.904411161s
	W0914 22:47:12.244978   46713 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0914 22:47:12.245070   46713 ssh_runner.go:195] Run: crio config
	I0914 22:47:12.328636   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:12.328663   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:12.328687   46713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:12.328710   46713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-930717 NodeName:old-k8s-version-930717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 22:47:12.328882   46713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-930717"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-930717
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:12.328984   46713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-930717 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:47:12.329062   46713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0914 22:47:12.339084   46713 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:12.339169   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:12.348354   46713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 22:47:12.369083   46713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:12.388242   46713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0914 22:47:12.407261   46713 ssh_runner.go:195] Run: grep 192.168.72.70	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:12.411055   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:12.425034   46713 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717 for IP: 192.168.72.70
	I0914 22:47:12.425070   46713 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:12.425236   46713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:12.425283   46713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:12.425372   46713 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.key
	I0914 22:47:12.425451   46713 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key.382dacf3
	I0914 22:47:12.425512   46713 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key
	I0914 22:47:12.425642   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:12.425671   46713 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:12.425685   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:12.425708   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:12.425732   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:12.425751   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:12.425789   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:12.426339   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:12.456306   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:47:12.486038   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:12.520941   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:47:12.552007   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:12.589620   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:12.619358   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:12.650395   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:12.678898   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:12.704668   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:12.730499   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:12.755286   46713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:12.773801   46713 ssh_runner.go:195] Run: openssl version
	I0914 22:47:12.781147   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:12.793953   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799864   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799922   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.806881   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:12.817936   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:12.830758   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836538   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836613   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.843368   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:12.855592   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:12.866207   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871317   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871368   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.878438   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:12.891012   46713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:12.895887   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:12.902284   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:12.909482   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:12.916524   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:12.924045   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:12.929935   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
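	The openssl x509 -checkend 86400 runs above verify that each control-plane certificate is still valid for at least another day before the cluster is restarted. An equivalent sketch using Go's crypto/x509 follows, with one illustrative path taken from the log; minikube itself shells out to openssl as shown rather than using this code.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the first certificate in the PEM file is still
	// valid d from now, the same question openssl -checkend answers.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}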
	I0914 22:47:12.937292   46713 kubeadm.go:404] StartCluster: {Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:12.937417   46713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:12.937470   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:12.975807   46713 cri.go:89] found id: ""
	I0914 22:47:12.975902   46713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:12.988356   46713 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:12.988379   46713 kubeadm.go:636] restartCluster start
	I0914 22:47:12.988434   46713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:13.000294   46713 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.001492   46713 kubeconfig.go:92] found "old-k8s-version-930717" server: "https://192.168.72.70:8443"
	I0914 22:47:13.008583   46713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:13.023004   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.023065   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.037604   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.037625   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.037671   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.048939   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.549653   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.549746   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.561983   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.049481   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.049588   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.064694   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.549101   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.549195   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.564858   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:15.049112   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.049206   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.063428   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:12.993654   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:14.995358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:13.946979   46412 pod_ready.go:92] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:13.947004   46412 pod_ready.go:81] duration metric: took 4.522495708s waiting for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:13.947013   46412 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:15.968061   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:18.465595   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:17.526472   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:17.526915   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:17.526946   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:17.526867   47373 retry.go:31] will retry after 3.587907236s: waiting for machine to come up
	I0914 22:47:15.549179   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.549273   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.561977   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.049593   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.049678   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.063654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.549178   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.549248   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.561922   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.049041   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.049131   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.062442   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.550005   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.550066   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.561254   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.049855   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.049932   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.062226   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.549845   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.549941   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.561219   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.049739   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.049829   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.061225   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.550035   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.550112   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.561546   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:20.049979   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.050080   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.061478   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
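	The repeated api_server.go:166 checks above shell out to pgrep -xnf to see whether a kube-apiserver process exists yet, treating exit status 1 as "not up yet" and trying again roughly every 500ms. A small sketch of that loop follows; it is assumed for illustration rather than taken from minikube's source.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverPID runs the same pgrep the log shows; a non-zero exit simply means
	// no matching process exists yet.
	func apiserverPID() (string, bool) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return "", false
		}
		return strings.TrimSpace(string(out)), true
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if pid, ok := apiserverPID(); ok {
				fmt.Println("kube-apiserver pid:", pid)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("kube-apiserver did not come up in time")
	}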
	I0914 22:47:17.489830   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:19.490802   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.490931   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.118871   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119369   45407 main.go:141] libmachine: (no-preload-344363) Found IP for machine: 192.168.39.60
	I0914 22:47:21.119391   45407 main.go:141] libmachine: (no-preload-344363) Reserving static IP address...
	I0914 22:47:21.119418   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has current primary IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119860   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.119888   45407 main.go:141] libmachine: (no-preload-344363) Reserved static IP address: 192.168.39.60
	I0914 22:47:21.119906   45407 main.go:141] libmachine: (no-preload-344363) DBG | skip adding static IP to network mk-no-preload-344363 - found existing host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"}
	I0914 22:47:21.119931   45407 main.go:141] libmachine: (no-preload-344363) DBG | Getting to WaitForSSH function...
	I0914 22:47:21.119949   45407 main.go:141] libmachine: (no-preload-344363) Waiting for SSH to be available...
	I0914 22:47:21.121965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122282   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.122312   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122392   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH client type: external
	I0914 22:47:21.122429   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa (-rw-------)
	I0914 22:47:21.122482   45407 main.go:141] libmachine: (no-preload-344363) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:21.122510   45407 main.go:141] libmachine: (no-preload-344363) DBG | About to run SSH command:
	I0914 22:47:21.122521   45407 main.go:141] libmachine: (no-preload-344363) DBG | exit 0
	I0914 22:47:21.206981   45407 main.go:141] libmachine: (no-preload-344363) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:21.207366   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetConfigRaw
	I0914 22:47:21.208066   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.210323   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210607   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.210639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210795   45407 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/config.json ...
	I0914 22:47:21.211016   45407 machine.go:88] provisioning docker machine ...
	I0914 22:47:21.211036   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:21.211258   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211431   45407 buildroot.go:166] provisioning hostname "no-preload-344363"
	I0914 22:47:21.211455   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211629   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.213574   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.213887   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.213921   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.214015   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.214181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214338   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.214648   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.215041   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.215056   45407 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-344363 && echo "no-preload-344363" | sudo tee /etc/hostname
	I0914 22:47:21.347323   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-344363
	
	I0914 22:47:21.347358   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.350445   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.350846   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.350882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.351144   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.351393   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351599   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351766   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.351944   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.352264   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.352291   45407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-344363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-344363/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-344363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:21.471619   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
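The hostname step above boils down to a guarded rewrite of /etc/hosts executed over SSH. The following is a minimal Go sketch of how such a command string could be assembled; buildHostsCommand and the standalone program are illustrative assumptions, not the provisioner's actual code.

    package main

    import "fmt"

    // buildHostsCommand (hypothetical helper) returns the guarded shell snippet
    // that maps 127.0.1.1 to the given hostname in /etc/hosts, mirroring the
    // sed/tee logic visible in the log output above.
    func buildHostsCommand(hostname string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts
      else
        echo '127.0.1.1 %s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname, hostname, hostname)
    }

    func main() {
        // Print the command that would be sent over SSH for this profile's hostname.
        fmt.Println(buildHostsCommand("no-preload-344363"))
    }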
	I0914 22:47:21.471648   45407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:21.471671   45407 buildroot.go:174] setting up certificates
	I0914 22:47:21.471683   45407 provision.go:83] configureAuth start
	I0914 22:47:21.471696   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.472019   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.474639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475113   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.475141   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475293   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.477627   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.477976   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.478009   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.478148   45407 provision.go:138] copyHostCerts
	I0914 22:47:21.478189   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:21.478198   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:21.478249   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:21.478336   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:21.478344   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:21.478362   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:21.478416   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:21.478423   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:21.478439   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:21.478482   45407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.no-preload-344363 san=[192.168.39.60 192.168.39.60 localhost 127.0.0.1 minikube no-preload-344363]
	I0914 22:47:21.546956   45407 provision.go:172] copyRemoteCerts
	I0914 22:47:21.547006   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:21.547029   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.549773   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550217   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.550257   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550468   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.550683   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.550850   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.551050   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:21.635939   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:21.656944   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:47:21.679064   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:21.701127   45407 provision.go:86] duration metric: configureAuth took 229.434247ms
	I0914 22:47:21.701147   45407 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:21.701319   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:47:21.701381   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.704100   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704475   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.704512   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704672   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.704865   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705046   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705218   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.705382   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.705828   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.705849   45407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:22.037291   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:22.037337   45407 machine.go:91] provisioned docker machine in 826.295956ms
	I0914 22:47:22.037350   45407 start.go:300] post-start starting for "no-preload-344363" (driver="kvm2")
	I0914 22:47:22.037363   45407 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:22.037396   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.037704   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:22.037729   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.040372   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040729   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.040757   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040896   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.041082   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.041266   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.041373   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.129612   45407 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:22.133522   45407 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:22.133550   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:22.133625   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:22.133715   45407 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:22.133844   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:22.142411   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:22.165470   45407 start.go:303] post-start completed in 128.106418ms
	I0914 22:47:22.165496   45407 fix.go:56] fixHost completed within 19.252903923s
	I0914 22:47:22.165524   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.168403   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168696   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.168731   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168894   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.169095   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169248   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169384   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.169571   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:22.169891   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:22.169904   45407 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:22.284038   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731642.258576336
	
	I0914 22:47:22.284062   45407 fix.go:206] guest clock: 1694731642.258576336
	I0914 22:47:22.284071   45407 fix.go:219] Guest: 2023-09-14 22:47:22.258576336 +0000 UTC Remote: 2023-09-14 22:47:22.16550191 +0000 UTC m=+357.203571663 (delta=93.074426ms)
	I0914 22:47:22.284107   45407 fix.go:190] guest clock delta is within tolerance: 93.074426ms
	I0914 22:47:22.284117   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 19.371563772s
	I0914 22:47:22.284146   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.284388   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:22.286809   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287091   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.287133   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287288   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287782   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287978   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.288050   45407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:22.288085   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.288176   45407 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:22.288197   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.290608   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.290936   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.290965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291067   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291157   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291345   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291516   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.291529   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.291554   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291649   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.291706   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291837   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291975   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.292158   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.417570   45407 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:22.423145   45407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:22.563752   45407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:22.569625   45407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:22.569718   45407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:22.585504   45407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:22.585527   45407 start.go:469] detecting cgroup driver to use...
	I0914 22:47:22.585610   45407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:22.599600   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:22.612039   45407 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:22.612080   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:22.624817   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:22.637141   45407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:22.744181   45407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:22.864420   45407 docker.go:212] disabling docker service ...
	I0914 22:47:22.864490   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:22.877360   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:22.888786   45407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:23.000914   45407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:23.137575   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:23.150682   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:23.167898   45407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:47:23.167966   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.176916   45407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:23.176991   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.185751   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.195260   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.204852   45407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:23.214303   45407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:23.222654   45407 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:23.222717   45407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:23.235654   45407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:23.244081   45407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:23.357943   45407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:47:23.521315   45407 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:23.521410   45407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:23.526834   45407 start.go:537] Will wait 60s for crictl version
	I0914 22:47:23.526889   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:23.530250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:23.562270   45407 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:23.562358   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.606666   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.658460   45407 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
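The CRI-O preparation above edits the drop-in config in place with sed (pause image, cgroup driver) and then restarts the service. A small sketch of assembling those edit commands is shown below, under the assumption that they would be shipped over the same SSH runner; the snippet only prints the commands rather than executing them.

    package main

    import "fmt"

    // crioDropIn is the drop-in file the log shows being rewritten in place.
    const crioDropIn = "/etc/crio/crio.conf.d/02-crio.conf"

    func main() {
        // One sed invocation per key, matching the edits in the log: the pause
        // image and the cgroup driver. On a real run these would execute on the
        // guest through the SSH runner; here they are only printed.
        edits := []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, crioDropIn),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, crioDropIn),
            "sudo systemctl restart crio",
        }
        for _, cmd := range edits {
            fmt.Println(cmd)
        }
    }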
	I0914 22:47:20.467600   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:20.964310   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.964331   46412 pod_ready.go:81] duration metric: took 7.017312906s waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.964349   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968539   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.968555   46412 pod_ready.go:81] duration metric: took 4.200242ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968563   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973180   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.973194   46412 pod_ready.go:81] duration metric: took 4.625123ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973206   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977403   46412 pod_ready.go:92] pod "kube-proxy-l8pq9" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.977418   46412 pod_ready.go:81] duration metric: took 4.206831ms waiting for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977425   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375236   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:22.375259   46412 pod_ready.go:81] duration metric: took 1.397826525s waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375271   46412 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
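The pod_ready lines interleaved above poll each control-plane pod until its Ready condition reports True, with a per-pod timeout. A rough client-go equivalent of that wait loop is sketched below; the kubeconfig path, poll interval, and 4-minute timeout are illustrative assumptions rather than the harness's actual helper.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the context expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s not ready: %w", ns, name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        // Illustrative kubeconfig path; the real harness uses the profile's context.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        fmt.Println(waitPodReady(ctx, cs, "kube-system", "etcd-embed-certs-588699"))
    }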
	I0914 22:47:23.659885   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:23.662745   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663195   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:23.663228   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663452   45407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:23.667637   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:23.678881   45407 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:47:23.678929   45407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:23.708267   45407 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:47:23.708309   45407 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:23.708390   45407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.708421   45407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.708424   45407 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0914 22:47:23.708437   45407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.708425   45407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.708537   45407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.708403   45407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.708393   45407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.709903   45407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.709887   45407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.709899   45407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.710189   45407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.710260   45407 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0914 22:47:23.710346   45407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.917134   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.929080   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.929396   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0914 22:47:23.935684   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.936236   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.937239   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.937622   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.006429   45407 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0914 22:47:24.006479   45407 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.006524   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.102547   45407 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0914 22:47:24.102597   45407 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.102641   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201012   45407 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0914 22:47:24.201050   45407 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.201100   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201106   45407 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0914 22:47:24.201138   45407 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.201156   45407 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0914 22:47:24.201203   45407 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.201227   45407 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0914 22:47:24.201282   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.201294   45407 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.201329   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201236   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201180   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.206295   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.263389   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0914 22:47:24.263451   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.263501   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.263513   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:24.263534   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0914 22:47:24.263573   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.263665   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.273844   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0914 22:47:24.273932   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:24.338823   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0914 22:47:24.338944   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:24.344560   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0914 22:47:24.344580   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0914 22:47:24.344594   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344635   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344659   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:24.344678   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0914 22:47:24.344723   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0914 22:47:24.344745   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0914 22:47:24.344816   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:24.346975   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0914 22:47:24.953835   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:20.549479   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.549585   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.563121   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.049732   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.049807   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.061447   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.549012   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.549073   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.561653   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.049517   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.049582   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.062280   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.549943   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.550017   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.562654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:23.024019   46713 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:23.024043   46713 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:23.024054   46713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:23.024101   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:23.060059   46713 cri.go:89] found id: ""
	I0914 22:47:23.060116   46713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:23.078480   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:23.087665   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
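The config check above probes for the four kubeconfig files kubeadm would have written; because none exist, stale-config cleanup is skipped and the kubeadm init phases are re-run. A trivial sketch of that existence check is below; the standalone program is illustrative only.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The same four files the log's `ls -la` probe looks for; if any is
        // missing, stale-config cleanup is skipped and the cluster is
        // reconfigured from the generated kubeadm.yaml instead.
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        missing := 0
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Println("missing:", f)
                missing++
            }
        }
        fmt.Printf("%d of %d kubeconfig files missing\n", missing, len(files))
    }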
	I0914 22:47:23.087714   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096513   46713 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096535   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:23.205072   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.081881   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.285041   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.364758   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.468127   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:24.468201   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:24.483354   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.007133   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.507231   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:23.992945   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.492600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:24.475872   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.978889   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.317110   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.97244294s)
	I0914 22:47:26.317145   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0914 22:47:26.317167   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317174   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.972489589s)
	I0914 22:47:26.317202   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0914 22:47:26.317215   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317248   45407 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.363386448s)
	I0914 22:47:26.317281   45407 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 22:47:26.317319   45407 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.317366   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:26.317213   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (1.972376756s)
	I0914 22:47:26.317426   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0914 22:47:28.397989   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (2.080744487s)
	I0914 22:47:28.398021   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0914 22:47:28.398031   45407 ssh_runner.go:235] Completed: which crictl: (2.080647539s)
	I0914 22:47:28.398048   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398093   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398095   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.006554   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:26.032232   46713 api_server.go:72] duration metric: took 1.564104415s to wait for apiserver process to appear ...
	I0914 22:47:26.032255   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:26.032270   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:28.992292   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.490442   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.033000   46713 api_server.go:269] stopped: https://192.168.72.70:8443/healthz: Get "https://192.168.72.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 22:47:31.033044   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:31.568908   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:31.568937   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:32.069915   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.080424   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.080456   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:32.570110   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.580879   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.580918   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:33.069247   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:33.077664   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:47:33.086933   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:47:33.086960   46713 api_server.go:131] duration metric: took 7.054699415s to wait for apiserver health ...
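The healthz wait above repeatedly probes the apiserver's /healthz endpoint, tolerating 403 and 500 responses until a plain 200 "ok" comes back. A minimal Go poller in that style is sketched below, assuming an unauthenticated probe that skips TLS verification; the endpoint is taken from the log, the timeouts are illustrative.

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs one GET against the apiserver /healthz endpoint,
    // skipping certificate verification the way an unauthenticated probe would.
    func checkHealthz(ctx context.Context, url string) (int, string, error) {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return 0, "", err
        }
        resp, err := client.Do(req)
        if err != nil {
            return 0, "", err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode, string(body), nil
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        url := "https://192.168.72.70:8443/healthz" // endpoint from the log above
        for {
            code, body, err := checkHealthz(ctx, url)
            if err == nil && code == http.StatusOK {
                fmt.Println("healthz ok:", body)
                return
            }
            fmt.Printf("not healthy yet (code=%d err=%v), retrying\n", code, err)
            select {
            case <-ctx.Done():
                fmt.Println("gave up:", ctx.Err())
                return
            case <-time.After(500 * time.Millisecond):
            }
        }
    }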
	I0914 22:47:33.086973   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:33.086981   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:33.088794   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:29.476304   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.975459   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:30.974281   45407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.57612291s)
	I0914 22:47:30.974347   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 22:47:30.974381   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.576263058s)
	I0914 22:47:30.974403   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0914 22:47:30.974427   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:30.974455   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:30.974470   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:33.737309   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.762815322s)
	I0914 22:47:33.737355   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0914 22:47:33.737379   45407 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.737322   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.762844826s)
	I0914 22:47:33.737464   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 22:47:33.737436   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.090357   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:33.103371   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:33.123072   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:33.133238   46713 system_pods.go:59] 7 kube-system pods found
	I0914 22:47:33.133268   46713 system_pods.go:61] "coredns-5644d7b6d9-8sbjk" [638464d2-96db-460d-bf82-0ee79df816da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:33.133278   46713 system_pods.go:61] "etcd-old-k8s-version-930717" [4b38f48a-fc4a-43d5-a2b4-414aff712c1b] Running
	I0914 22:47:33.133286   46713 system_pods.go:61] "kube-apiserver-old-k8s-version-930717" [523a3adc-8c68-4980-8a53-133476ce2488] Running
	I0914 22:47:33.133294   46713 system_pods.go:61] "kube-controller-manager-old-k8s-version-930717" [36fd7e01-4a5d-446f-8370-f7a7e886571c] Running
	I0914 22:47:33.133306   46713 system_pods.go:61] "kube-proxy-l4qz4" [c61d0471-0a9e-4662-b723-39944c8b3c31] Running
	I0914 22:47:33.133314   46713 system_pods.go:61] "kube-scheduler-old-k8s-version-930717" [f6d45807-c7f2-4545-b732-45dbd945c660] Running
	I0914 22:47:33.133323   46713 system_pods.go:61] "storage-provisioner" [2956bea1-80f8-4f61-a635-4332d4e3042e] Running
	I0914 22:47:33.133331   46713 system_pods.go:74] duration metric: took 10.233824ms to wait for pod list to return data ...
	I0914 22:47:33.133343   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:33.137733   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:33.137765   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:33.137776   46713 node_conditions.go:105] duration metric: took 4.42667ms to run NodePressure ...
	I0914 22:47:33.137795   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:33.590921   46713 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:33.597720   46713 retry.go:31] will retry after 159.399424ms: kubelet not initialised
	I0914 22:47:33.767747   46713 retry.go:31] will retry after 191.717885ms: kubelet not initialised
	I0914 22:47:33.967120   46713 retry.go:31] will retry after 382.121852ms: kubelet not initialised
	I0914 22:47:34.354106   46713 retry.go:31] will retry after 1.055800568s: kubelet not initialised
	I0914 22:47:35.413704   46713 retry.go:31] will retry after 1.341728619s: kubelet not initialised
	I0914 22:47:33.993188   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.491280   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:34.475254   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.977175   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.760804   46713 retry.go:31] will retry after 2.668611083s: kubelet not initialised
	I0914 22:47:39.434688   46713 retry.go:31] will retry after 2.1019007s: kubelet not initialised
	I0914 22:47:38.994051   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.490913   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:38.998980   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.474686   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:40.530763   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.793268381s)
	I0914 22:47:40.530793   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0914 22:47:40.530820   45407 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:40.530881   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:41.888277   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.357355595s)
	I0914 22:47:41.888305   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0914 22:47:41.888338   45407 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:41.888405   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:42.537191   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 22:47:42.537244   45407 cache_images.go:123] Successfully loaded all cached images
	I0914 22:47:42.537251   45407 cache_images.go:92] LoadImages completed in 18.828927203s
	I0914 22:47:42.537344   45407 ssh_runner.go:195] Run: crio config
	I0914 22:47:42.594035   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:47:42.594056   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:42.594075   45407 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:42.594098   45407 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-344363 NodeName:no-preload-344363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:47:42.594272   45407 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-344363"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:42.594383   45407 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-344363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:47:42.594449   45407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:47:42.604172   45407 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:42.604243   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:42.612570   45407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0914 22:47:42.628203   45407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:42.643625   45407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0914 22:47:42.658843   45407 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:42.661922   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:42.672252   45407 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363 for IP: 192.168.39.60
	I0914 22:47:42.672279   45407 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:42.672420   45407 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:42.672462   45407 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:42.672536   45407 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.key
	I0914 22:47:42.672630   45407 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key.a014e791
	I0914 22:47:42.672693   45407 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key
	I0914 22:47:42.672828   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:42.672867   45407 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:42.672879   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:42.672915   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:42.672948   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:42.672982   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:42.673044   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:42.673593   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:42.695080   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:47:42.716844   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:42.746475   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0914 22:47:42.769289   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:42.790650   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:42.811665   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:42.833241   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:42.853851   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:42.875270   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:42.896913   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:42.917370   45407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:42.934549   45407 ssh_runner.go:195] Run: openssl version
	I0914 22:47:42.939762   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:42.949829   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954155   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954204   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.959317   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:42.968463   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:42.979023   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983436   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983502   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.988655   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:42.998288   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:43.007767   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011865   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011940   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.016837   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:43.026372   45407 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:43.030622   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:43.036026   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:43.041394   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:43.046608   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:43.051675   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:43.056621   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:47:43.061552   45407 kubeadm.go:404] StartCluster: {Name:no-preload-344363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:43.061645   45407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:43.061700   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:43.090894   45407 cri.go:89] found id: ""
	I0914 22:47:43.090957   45407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:43.100715   45407 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:43.100732   45407 kubeadm.go:636] restartCluster start
	I0914 22:47:43.100782   45407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:43.109233   45407 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.110217   45407 kubeconfig.go:92] found "no-preload-344363" server: "https://192.168.39.60:8443"
	I0914 22:47:43.112442   45407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:43.120580   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.120619   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.131224   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.131238   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.131292   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.140990   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.641661   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.641753   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.653379   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.142002   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.142077   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.154194   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.641806   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.641931   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.653795   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:41.541334   46713 retry.go:31] will retry after 2.553142131s: kubelet not initialised
	I0914 22:47:44.100647   46713 retry.go:31] will retry after 6.538244211s: kubelet not initialised
	I0914 22:47:43.995757   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.490438   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:43.974300   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.474137   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:45.141728   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.141816   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.153503   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:45.641693   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.641775   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.653204   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.141748   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.141838   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.153035   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.641294   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.641386   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.653144   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.141813   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.141915   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.152408   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.641793   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.641872   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.653228   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.141212   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.141304   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.152568   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.641805   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.641881   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.652184   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.141839   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.141909   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.152921   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.642082   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.642160   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.656837   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.991209   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:51.492672   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:48.973567   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.974964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:52.975525   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.141324   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.141399   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.153003   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:50.642032   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.642113   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.653830   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.141403   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.141486   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.152324   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.641932   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.642027   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.653279   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.141928   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.141998   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.152653   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.641151   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.641239   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.652312   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:53.121389   45407 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:53.121422   45407 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:53.121436   45407 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:53.121511   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:53.150615   45407 cri.go:89] found id: ""
	I0914 22:47:53.150681   45407 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:53.164511   45407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:53.173713   45407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:53.173778   45407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183776   45407 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183797   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:53.310974   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.230246   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.409237   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.474183   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.572433   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:54.572581   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:54.584938   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:50.644922   46713 retry.go:31] will retry after 11.248631638s: kubelet not initialised
	I0914 22:47:53.990630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.990661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.475037   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:57.475941   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.098638   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:55.599218   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.099188   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.598826   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.621701   45407 api_server.go:72] duration metric: took 2.049267478s to wait for apiserver process to appear ...
	I0914 22:47:56.621729   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:56.621749   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622263   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:56.622301   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622682   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:57.123404   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.433050   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:48:00.433082   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:48:00.433096   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.467030   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.467073   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:00.623319   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.633882   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.633912   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.123559   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.128661   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.128691   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.623201   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.629775   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.629804   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:02.123439   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:02.131052   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:48:02.141185   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:48:02.141213   45407 api_server.go:131] duration metric: took 5.519473898s to wait for apiserver health ...
	I0914 22:48:02.141222   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:48:02.141228   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:48:02.143254   45407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:57.992038   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:59.992600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:02.144756   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:48:02.158230   45407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:48:02.182382   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:48:02.204733   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:48:02.204786   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:48:02.204801   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:48:02.204817   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:48:02.204834   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:48:02.204847   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:48:02.204859   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:48:02.204876   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:48:02.204887   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:48:02.204900   45407 system_pods.go:74] duration metric: took 22.491699ms to wait for pod list to return data ...
	I0914 22:48:02.204913   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:48:02.208661   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:48:02.208692   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:48:02.208706   45407 node_conditions.go:105] duration metric: took 3.7844ms to run NodePressure ...
	I0914 22:48:02.208731   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:48:02.454257   45407 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458848   45407 kubeadm.go:787] kubelet initialised
	I0914 22:48:02.458868   45407 kubeadm.go:788] duration metric: took 4.585034ms waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458874   45407 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:02.464634   45407 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.471350   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471371   45407 pod_ready.go:81] duration metric: took 6.714087ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.471379   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471387   45407 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.476977   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.476998   45407 pod_ready.go:81] duration metric: took 5.604627ms waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.477009   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.477019   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.483218   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483236   45407 pod_ready.go:81] duration metric: took 6.211697ms waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.483244   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483256   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.589184   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589217   45407 pod_ready.go:81] duration metric: took 105.950074ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.589227   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589236   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.987051   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987081   45407 pod_ready.go:81] duration metric: took 397.836385ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.987094   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987103   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.392835   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392865   45407 pod_ready.go:81] duration metric: took 405.754351ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.392876   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392886   45407 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.786615   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786641   45407 pod_ready.go:81] duration metric: took 393.746366ms waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.786652   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786660   45407 pod_ready.go:38] duration metric: took 1.327778716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:03.786676   45407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:48:03.798081   45407 ops.go:34] apiserver oom_adj: -16
	I0914 22:48:03.798101   45407 kubeadm.go:640] restartCluster took 20.697363165s
	I0914 22:48:03.798107   45407 kubeadm.go:406] StartCluster complete in 20.736562339s
	I0914 22:48:03.798121   45407 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.798193   45407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:48:03.799954   45407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.800200   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:48:03.800299   45407 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:48:03.800368   45407 addons.go:69] Setting storage-provisioner=true in profile "no-preload-344363"
	I0914 22:48:03.800449   45407 addons.go:231] Setting addon storage-provisioner=true in "no-preload-344363"
	W0914 22:48:03.800462   45407 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:48:03.800511   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800394   45407 addons.go:69] Setting metrics-server=true in profile "no-preload-344363"
	I0914 22:48:03.800543   45407 addons.go:231] Setting addon metrics-server=true in "no-preload-344363"
	W0914 22:48:03.800558   45407 addons.go:240] addon metrics-server should already be in state true
	I0914 22:48:03.800590   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800388   45407 addons.go:69] Setting default-storageclass=true in profile "no-preload-344363"
	I0914 22:48:03.800633   45407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-344363"
	I0914 22:48:03.800411   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:48:03.800906   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800909   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800944   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.801011   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.801054   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.800968   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.804911   45407 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-344363" context rescaled to 1 replicas
	I0914 22:48:03.804946   45407 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:48:03.807503   45407 out.go:177] * Verifying Kubernetes components...
	I0914 22:47:59.973913   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:01.974625   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:03.808768   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:48:03.816774   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0914 22:48:03.816773   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0914 22:48:03.817265   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817518   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817791   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.817821   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818011   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.818032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818223   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818407   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818431   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.818976   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.819027   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.829592   45407 addons.go:231] Setting addon default-storageclass=true in "no-preload-344363"
	W0914 22:48:03.829614   45407 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:48:03.829641   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.830013   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.830047   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.835514   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0914 22:48:03.835935   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.836447   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.836473   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.836841   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.837011   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.838909   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.843677   45407 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:48:03.845231   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:48:03.845246   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:48:03.845261   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.844291   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0914 22:48:03.845685   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.846224   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.846242   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.846572   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.847073   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.847103   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.847332   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0914 22:48:03.848400   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.848666   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849160   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.849182   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.849263   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.849283   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849314   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.849461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.849570   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.849635   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.849682   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.850555   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.850585   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.863035   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0914 22:48:03.863559   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864010   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.864204   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0914 22:48:03.864478   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.864526   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864752   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.864936   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864955   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.865261   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.865489   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.866474   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.868300   45407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:48:03.867504   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.869841   45407 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:03.869855   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:48:03.869874   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.870067   45407 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:03.870078   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:48:03.870091   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.873462   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.873859   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.873882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874026   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874114   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.874287   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.874397   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.874903   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874949   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.874980   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.875135   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.875301   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.875486   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.956934   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:48:03.956956   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:48:03.973872   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:48:03.973896   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:48:04.002028   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.002051   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:48:04.018279   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:04.037990   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:04.047125   45407 node_ready.go:35] waiting up to 6m0s for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:04.047292   45407 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:48:04.086299   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.991926   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.991952   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992225   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992292   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992324   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992342   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992364   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992614   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992634   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992649   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992657   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992665   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992914   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992933   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:01.898769   46713 retry.go:31] will retry after 9.475485234s: kubelet not initialised
	I0914 22:48:05.528027   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.490009157s)
	I0914 22:48:05.528078   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528087   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528435   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528457   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528470   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528436   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.528481   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528802   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528824   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528829   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.600274   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.51392997s)
	I0914 22:48:05.600338   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600351   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.600645   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.600670   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.600682   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600695   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.602502   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.602513   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.602524   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.602546   45407 addons.go:467] Verifying addon metrics-server=true in "no-preload-344363"
	I0914 22:48:05.604330   45407 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 22:48:02.491577   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.995014   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.474529   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:06.474964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:05.605648   45407 addons.go:502] enable addons completed in 1.805353931s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0914 22:48:06.198114   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:08.199023   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:07.490770   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:09.991693   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:08.974469   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:11.474711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:10.698198   45407 node_ready.go:49] node "no-preload-344363" has status "Ready":"True"
	I0914 22:48:10.698218   45407 node_ready.go:38] duration metric: took 6.651066752s waiting for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:10.698227   45407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:10.704694   45407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710103   45407 pod_ready.go:92] pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:10.710119   45407 pod_ready.go:81] duration metric: took 5.400404ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710128   45407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.747445   45407 pod_ready.go:102] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.229927   45407 pod_ready.go:92] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:13.229953   45407 pod_ready.go:81] duration metric: took 2.519818297s waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:13.229966   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747126   45407 pod_ready.go:92] pod "kube-apiserver-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.747147   45407 pod_ready.go:81] duration metric: took 1.51717338s waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747157   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752397   45407 pod_ready.go:92] pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.752413   45407 pod_ready.go:81] duration metric: took 5.250049ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752420   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.380752   46713 kubeadm.go:787] kubelet initialised
	I0914 22:48:11.380783   46713 kubeadm.go:788] duration metric: took 37.789831498s waiting for restarted kubelet to initialise ...
	I0914 22:48:11.380793   46713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:11.386189   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392948   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.392970   46713 pod_ready.go:81] duration metric: took 6.75113ms waiting for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392981   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398606   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.398627   46713 pod_ready.go:81] duration metric: took 5.638835ms waiting for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398639   46713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404145   46713 pod_ready.go:92] pod "etcd-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.404174   46713 pod_ready.go:81] duration metric: took 5.527173ms waiting for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404187   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409428   46713 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.409448   46713 pod_ready.go:81] duration metric: took 5.252278ms waiting for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409461   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779225   46713 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.779252   46713 pod_ready.go:81] duration metric: took 369.782336ms waiting for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779267   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179256   46713 pod_ready.go:92] pod "kube-proxy-l4qz4" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.179277   46713 pod_ready.go:81] duration metric: took 400.003039ms waiting for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179286   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578889   46713 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.578921   46713 pod_ready.go:81] duration metric: took 399.627203ms waiting for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578935   46713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:12.491274   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:14.991146   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.991799   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.974725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.473917   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.474722   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:15.099588   45407 pod_ready.go:92] pod "kube-proxy-zzkbp" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.099612   45407 pod_ready.go:81] duration metric: took 347.18498ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.099623   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498642   45407 pod_ready.go:92] pod "kube-scheduler-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.498664   45407 pod_ready.go:81] duration metric: took 399.034277ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498678   45407 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:17.806138   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.887157   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:19.390361   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.991911   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.993133   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.474578   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.305450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:22.305521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:24.306131   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:21.885143   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.886722   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.490126   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.991185   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.974547   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.473850   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.805651   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.306125   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.384992   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.385266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.385877   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:27.991827   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.991995   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.475603   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.974568   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:31.806483   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.306121   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.886341   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.385506   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.488948   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.490950   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.989621   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.474815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.973407   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.806806   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.806988   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.886043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.386865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.991151   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:41.491384   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:39.974109   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.473010   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.808362   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.305126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.886094   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.386710   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.991121   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.992500   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:44.475120   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:46.973837   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.305212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.305740   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.806334   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.886380   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.887578   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:48.490416   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:50.990196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.474209   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.474657   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.808853   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.305742   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.888488   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.385591   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:52.990333   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.991549   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:53.974301   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:55.976250   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.474372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.807759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.304597   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.885164   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.885809   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:57.491267   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.492043   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.991231   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:00.974064   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:02.975136   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.808275   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.385492   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.385865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:05.386266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.992513   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.490253   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:04.975537   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.473413   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.306066   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.805711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.886495   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.386100   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.995545   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.490960   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:09.476367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.974480   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.807870   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.306759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:12.386166   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.990090   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.489864   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.975102   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.474761   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.475314   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:15.809041   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.305700   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:17.385490   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:19.386201   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.490727   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.493813   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.973383   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.973978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.306906   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.805781   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.806417   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:21.387171   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:23.394663   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.989981   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.998602   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.975048   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.473804   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.805993   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:25.886256   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:28.385307   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:30.386473   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.490860   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.991665   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.992373   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.475815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.973092   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.305648   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.806797   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.886577   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.386203   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.490086   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:36.490465   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:33.973662   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.974041   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.473275   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.306848   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.806295   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.388154   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.886447   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.490850   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.989734   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.473543   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.473711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:41.807197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.305572   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.385788   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.386844   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.995794   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:45.490630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.474251   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.974425   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.306070   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.805530   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.886095   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.888504   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:47.491269   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.990921   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.474354   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.973552   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:50.806526   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.807021   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.385411   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.385825   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.490166   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:54.991982   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.974372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:56.473350   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.305863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.306450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.308315   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.886560   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.886950   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.386043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.490604   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.490811   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.993715   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:58.973152   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.975078   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.474589   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.806409   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.806552   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:02.387458   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.886066   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.490551   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:06.490632   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.974290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.974714   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.810256   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.305443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.386252   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:09.887808   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.490994   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.990417   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.474207   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.973759   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.305662   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.807626   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.385387   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.386055   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.991196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.489856   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.974362   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.474890   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.305348   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.306521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.306661   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:16.386682   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:18.386805   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.491969   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.990884   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.991904   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.476052   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.973290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.806863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.810113   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:20.886118   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.388653   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:24.490861   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.991437   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.474556   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.307894   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.809126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:25.885409   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:27.886080   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.386151   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:29.489358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.491041   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.973725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.975342   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.474590   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.306171   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.307126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:32.386190   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:34.886414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.491383   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.492155   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.974978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:38.473506   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.307221   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.806174   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.386235   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.886579   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.990447   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.991649   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.474117   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.973778   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.308130   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.806411   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.807765   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.385199   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.387102   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.491019   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.993076   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.974689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.473863   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.305509   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.305825   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:46.885280   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.385189   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.491661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.989457   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.991512   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.973709   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.976112   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.306459   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.805441   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.386498   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.887424   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.492074   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.989668   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.473073   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.473689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.474597   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:55.806711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.305434   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.386640   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.885298   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.995348   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:01.491262   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.974371   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.474367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.305803   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.806120   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:04.807184   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.886357   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.887274   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:05.386976   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.708637   45954 pod_ready.go:81] duration metric: took 4m0.000105295s waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:03.708672   45954 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:03.708681   45954 pod_ready.go:38] duration metric: took 4m6.567418041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:03.708699   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:51:03.708739   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:03.708804   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:03.759664   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:03.759688   45954 cri.go:89] found id: ""
	I0914 22:51:03.759697   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:03.759753   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.764736   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:03.764789   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:03.800251   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:03.800280   45954 cri.go:89] found id: ""
	I0914 22:51:03.800290   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:03.800341   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.804761   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:03.804818   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:03.847136   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:03.847162   45954 cri.go:89] found id: ""
	I0914 22:51:03.847172   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:03.847215   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.851253   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:03.851325   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:03.882629   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:03.882654   45954 cri.go:89] found id: ""
	I0914 22:51:03.882664   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:03.882713   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.887586   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:03.887642   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:03.916702   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:03.916723   45954 cri.go:89] found id: ""
	I0914 22:51:03.916730   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:03.916773   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.921172   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:03.921232   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:03.950593   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:03.950618   45954 cri.go:89] found id: ""
	I0914 22:51:03.950628   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:03.950689   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.954303   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:03.954366   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:03.982565   45954 cri.go:89] found id: ""
	I0914 22:51:03.982588   45954 logs.go:284] 0 containers: []
	W0914 22:51:03.982597   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:03.982604   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:03.982662   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:04.011932   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.011957   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:04.011964   45954 cri.go:89] found id: ""
	I0914 22:51:04.011972   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:04.012026   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.016091   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.019830   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:04.019852   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:04.061469   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:04.061494   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:04.092823   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:04.092846   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:04.156150   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:04.156190   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:04.169879   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:04.169920   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:04.226165   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:04.226198   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.255658   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:04.255692   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:04.299368   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:04.299401   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:04.440433   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:04.440467   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:04.477396   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:04.477425   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:04.513399   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:04.513431   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:05.016889   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:05.016925   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:05.067712   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:05.067749   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:05.973423   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.973637   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.307754   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.805419   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.389465   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.885150   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.597529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:51:07.614053   45954 api_server.go:72] duration metric: took 4m15.435815174s to wait for apiserver process to appear ...
	I0914 22:51:07.614076   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:51:07.614106   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:07.614155   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:07.643309   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:07.643333   45954 cri.go:89] found id: ""
	I0914 22:51:07.643342   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:07.643411   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.647434   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:07.647511   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:07.676943   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:07.676959   45954 cri.go:89] found id: ""
	I0914 22:51:07.676966   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:07.677006   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.681053   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:07.681101   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:07.714710   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:07.714736   45954 cri.go:89] found id: ""
	I0914 22:51:07.714745   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:07.714807   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.718900   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:07.718966   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:07.754786   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:07.754808   45954 cri.go:89] found id: ""
	I0914 22:51:07.754815   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:07.754867   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.759623   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:07.759693   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:07.794366   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:07.794389   45954 cri.go:89] found id: ""
	I0914 22:51:07.794398   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:07.794457   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.798717   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:07.798777   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:07.831131   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:07.831158   45954 cri.go:89] found id: ""
	I0914 22:51:07.831167   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:07.831227   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.835696   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:07.835762   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:07.865802   45954 cri.go:89] found id: ""
	I0914 22:51:07.865831   45954 logs.go:284] 0 containers: []
	W0914 22:51:07.865841   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:07.865849   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:07.865905   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:07.895025   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:07.895049   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:07.895056   45954 cri.go:89] found id: ""
	I0914 22:51:07.895064   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:07.895118   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.899230   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.903731   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:07.903751   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:08.033922   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:08.033952   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:08.068784   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:08.068812   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:08.120395   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:08.120428   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:08.133740   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:08.133763   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:08.173288   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:08.173320   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:08.203964   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:08.203988   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:08.732327   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:08.732367   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:08.784110   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:08.784150   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:08.819179   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:08.819210   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:08.866612   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:08.866644   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:08.900892   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:08.900939   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:08.950563   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:08.950593   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:11.505428   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:51:11.511228   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:51:11.512855   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:51:11.512881   45954 api_server.go:131] duration metric: took 3.898798182s to wait for apiserver health ...
	I0914 22:51:11.512891   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:51:11.512911   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:11.512954   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:11.544538   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:11.544563   45954 cri.go:89] found id: ""
	I0914 22:51:11.544573   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:11.544629   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.548878   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:11.548946   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:11.578439   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:11.578464   45954 cri.go:89] found id: ""
	I0914 22:51:11.578473   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:11.578531   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.582515   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:11.582576   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:11.611837   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:11.611857   45954 cri.go:89] found id: ""
	I0914 22:51:11.611863   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:11.611917   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.615685   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:11.615744   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:11.645850   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:11.645869   45954 cri.go:89] found id: ""
	I0914 22:51:11.645876   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:11.645914   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.649995   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:11.650048   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:11.683515   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:11.683541   45954 cri.go:89] found id: ""
	I0914 22:51:11.683550   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:11.683604   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.687715   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:11.687806   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:11.721411   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.721428   45954 cri.go:89] found id: ""
	I0914 22:51:11.721434   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:11.721477   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.725801   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:11.725859   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:11.760391   45954 cri.go:89] found id: ""
	I0914 22:51:11.760417   45954 logs.go:284] 0 containers: []
	W0914 22:51:11.760427   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:11.760437   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:11.760498   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:11.792140   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.792162   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:11.792168   45954 cri.go:89] found id: ""
	I0914 22:51:11.792175   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:11.792234   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.796600   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.800888   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:11.800912   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:11.863075   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:11.863106   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:11.877744   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:11.877775   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.930381   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:11.930418   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.961471   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:11.961497   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:12.005391   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:12.005417   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:12.034742   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:12.034771   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:12.064672   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:12.064702   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:12.095801   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:12.095834   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:12.124224   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:12.124260   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:09.974433   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.975389   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.806380   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.807443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:12.657331   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:12.657375   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:12.718197   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:12.718227   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:12.845353   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:12.845381   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:15.395502   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:51:15.395524   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.395529   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.395534   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.395540   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.395544   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.395548   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.395554   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.395559   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.395565   45954 system_pods.go:74] duration metric: took 3.882669085s to wait for pod list to return data ...
	I0914 22:51:15.395572   45954 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:51:15.398128   45954 default_sa.go:45] found service account: "default"
	I0914 22:51:15.398148   45954 default_sa.go:55] duration metric: took 2.571314ms for default service account to be created ...
	I0914 22:51:15.398155   45954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:51:15.407495   45954 system_pods.go:86] 8 kube-system pods found
	I0914 22:51:15.407517   45954 system_pods.go:89] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.407522   45954 system_pods.go:89] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.407527   45954 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.407532   45954 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.407535   45954 system_pods.go:89] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.407540   45954 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.407549   45954 system_pods.go:89] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.407558   45954 system_pods.go:89] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.407576   45954 system_pods.go:126] duration metric: took 9.409452ms to wait for k8s-apps to be running ...
	I0914 22:51:15.407587   45954 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:51:15.407633   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:15.424728   45954 system_svc.go:56] duration metric: took 17.122868ms WaitForService to wait for kubelet.
	I0914 22:51:15.424754   45954 kubeadm.go:581] duration metric: took 4m23.246518879s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:51:15.424794   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:51:15.428492   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:51:15.428520   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:51:15.428534   45954 node_conditions.go:105] duration metric: took 3.733977ms to run NodePressure ...
	I0914 22:51:15.428550   45954 start.go:228] waiting for startup goroutines ...
	I0914 22:51:15.428563   45954 start.go:233] waiting for cluster config update ...
	I0914 22:51:15.428576   45954 start.go:242] writing updated cluster config ...
	I0914 22:51:15.428887   45954 ssh_runner.go:195] Run: rm -f paused
	I0914 22:51:15.479711   45954 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:51:15.482387   45954 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799144" cluster and "default" namespace by default
	I0914 22:51:11.885968   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.887391   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:14.474188   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.974056   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.306146   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.806037   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.386306   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.386406   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:19.474446   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:21.474860   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.375841   46412 pod_ready.go:81] duration metric: took 4m0.000552226s waiting for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:22.375872   46412 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:22.375890   46412 pod_ready.go:38] duration metric: took 4m12.961510371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:22.375915   46412 kubeadm.go:640] restartCluster took 4m33.075347594s
	W0914 22:51:22.375983   46412 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:51:22.376022   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:51:20.806249   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.807141   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:24.809235   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:20.888098   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:23.386482   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:25.386542   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.305114   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:29.306240   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.886695   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:30.385740   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:31.306508   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:33.306655   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:32.886111   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.384925   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.805992   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:38.307801   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:37.385344   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:39.888303   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:40.806212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:43.305815   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:42.388414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:44.388718   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:45.306197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:47.806983   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:49.807150   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:46.885737   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:48.885794   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.115476   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.73941793s)
	I0914 22:51:53.115549   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:53.128821   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:51:53.137267   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:51:53.145533   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:51:53.145569   46412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 22:51:53.202279   46412 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 22:51:53.202501   46412 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:51:53.353512   46412 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:51:53.353674   46412 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:51:53.353816   46412 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:51:53.513428   46412 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:51:53.515450   46412 out.go:204]   - Generating certificates and keys ...
	I0914 22:51:53.515574   46412 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:51:53.515672   46412 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:51:53.515785   46412 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:51:53.515896   46412 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:51:53.516234   46412 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:51:53.516841   46412 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:51:53.517488   46412 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:51:53.517974   46412 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:51:53.518563   46412 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:51:53.519109   46412 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:51:53.519728   46412 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:51:53.519809   46412 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:51:53.641517   46412 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:51:53.842920   46412 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:51:53.982500   46412 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:51:54.065181   46412 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:51:54.065678   46412 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:51:54.071437   46412 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:51:52.305643   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.305996   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:51.386246   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.386956   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.073206   46412 out.go:204]   - Booting up control plane ...
	I0914 22:51:54.073363   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:51:54.073470   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:51:54.073554   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:51:54.091178   46412 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:51:54.091289   46412 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:51:54.091371   46412 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:51:54.221867   46412 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:51:56.306473   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:58.306953   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:55.886624   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:57.887222   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:00.385756   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.225144   46412 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002879 seconds
	I0914 22:52:02.225314   46412 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:02.244705   46412 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:02.778808   46412 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:02.779047   46412 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-588699 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:52:03.296381   46412 kubeadm.go:322] [bootstrap-token] Using token: x2l9oo.p0a5g5jx49srzji3
	I0914 22:52:03.297976   46412 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:03.298091   46412 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:03.308475   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:52:03.319954   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:03.325968   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:03.330375   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:03.334686   46412 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:03.353185   46412 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:52:03.622326   46412 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:03.721359   46412 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:03.721385   46412 kubeadm.go:322] 
	I0914 22:52:03.721472   46412 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:03.721486   46412 kubeadm.go:322] 
	I0914 22:52:03.721589   46412 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:03.721602   46412 kubeadm.go:322] 
	I0914 22:52:03.721623   46412 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:03.721678   46412 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:03.721727   46412 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:03.721764   46412 kubeadm.go:322] 
	I0914 22:52:03.721856   46412 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 22:52:03.721867   46412 kubeadm.go:322] 
	I0914 22:52:03.721945   46412 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:52:03.721954   46412 kubeadm.go:322] 
	I0914 22:52:03.722029   46412 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:03.722137   46412 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:03.722240   46412 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:03.722250   46412 kubeadm.go:322] 
	I0914 22:52:03.722367   46412 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:52:03.722468   46412 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:03.722479   46412 kubeadm.go:322] 
	I0914 22:52:03.722583   46412 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.722694   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:03.722719   46412 kubeadm.go:322] 	--control-plane 
	I0914 22:52:03.722752   46412 kubeadm.go:322] 
	I0914 22:52:03.722887   46412 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:03.722912   46412 kubeadm.go:322] 
	I0914 22:52:03.723031   46412 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.723170   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:03.724837   46412 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:03.724867   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:52:03.724879   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:03.726645   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:03.728115   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:03.741055   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:52:03.811746   46412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:03.811823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=embed-certs-588699 minikube.k8s.io/updated_at=2023_09_14T22_52_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:03.811827   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:00.805633   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.805831   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.807503   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.885499   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.886940   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.097721   46412 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:04.097763   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.186240   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.773886   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.273494   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.773993   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.274084   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.773309   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.273666   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.773916   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.274226   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.774073   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.807538   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.306062   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:06.886980   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.385212   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.274041   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:09.773409   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.274272   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.774321   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.274268   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.774250   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.273823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.774000   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.273596   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.774284   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.806015   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:14.308997   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:11.386087   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:12.580003   46713 pod_ready.go:81] duration metric: took 4m0.001053291s waiting for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:12.580035   46713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:12.580062   46713 pod_ready.go:38] duration metric: took 4m1.199260232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:12.580089   46713 kubeadm.go:640] restartCluster took 4m59.591702038s
	W0914 22:52:12.580145   46713 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:52:12.580169   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:52:14.274174   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:14.773472   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.273376   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.773286   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.273920   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.773334   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.926033   46412 kubeadm.go:1081] duration metric: took 13.114277677s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:16.926076   46412 kubeadm.go:406] StartCluster complete in 5m27.664586228s
	I0914 22:52:16.926099   46412 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.926229   46412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:16.928891   46412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.929177   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:16.929313   46412 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:16.929393   46412 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-588699"
	I0914 22:52:16.929408   46412 addons.go:69] Setting default-storageclass=true in profile "embed-certs-588699"
	I0914 22:52:16.929423   46412 addons.go:69] Setting metrics-server=true in profile "embed-certs-588699"
	I0914 22:52:16.929435   46412 addons.go:231] Setting addon metrics-server=true in "embed-certs-588699"
	W0914 22:52:16.929446   46412 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:16.929446   46412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-588699"
	I0914 22:52:16.929475   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:52:16.929508   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929418   46412 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-588699"
	W0914 22:52:16.929533   46412 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:16.929574   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929907   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929938   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929939   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929963   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929968   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929995   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.948975   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I0914 22:52:16.948990   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0914 22:52:16.948977   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0914 22:52:16.949953   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950006   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.949957   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950601   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950607   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950620   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950626   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950632   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950647   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.951159   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951191   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951410   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951808   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951829   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.951896   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951906   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.951911   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.961182   46412 addons.go:231] Setting addon default-storageclass=true in "embed-certs-588699"
	W0914 22:52:16.961207   46412 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:16.961236   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.961615   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.961637   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.976517   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0914 22:52:16.976730   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0914 22:52:16.977005   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977161   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977448   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977466   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977564   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977589   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977781   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977913   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977966   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.978108   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.980084   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.980429   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.982113   46412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:16.983227   46412 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:16.984383   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:16.984394   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:16.984407   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.983307   46412 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:16.984439   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:16.984455   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.987850   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0914 22:52:16.987989   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988270   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.988771   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.988788   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.988849   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.988867   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988894   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.989058   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.989528   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.989748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.990151   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.990172   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.990441   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:16.990597   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.990766   46412 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-588699" context rescaled to 1 replicas
	I0914 22:52:16.990794   46412 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:16.992351   46412 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:16.990986   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.991129   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.994003   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:16.994015   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.994097   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.994300   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.994607   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.007652   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0914 22:52:17.008127   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:17.008676   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:17.008699   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:17.009115   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:17.009301   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:17.010905   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:17.011169   46412 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.011183   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:17.011201   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:17.014427   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.014837   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:17.014865   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.015132   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:17.015299   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:17.015435   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:17.015585   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.124720   46412 node_ready.go:35] waiting up to 6m0s for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.124831   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:17.128186   46412 node_ready.go:49] node "embed-certs-588699" has status "Ready":"True"
	I0914 22:52:17.128211   46412 node_ready.go:38] duration metric: took 3.459847ms waiting for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.128221   46412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.133021   46412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138574   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.138594   46412 pod_ready.go:81] duration metric: took 5.550933ms waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138605   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151548   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.151569   46412 pod_ready.go:81] duration metric: took 12.956129ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151581   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169368   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.169393   46412 pod_ready.go:81] duration metric: took 17.803681ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169406   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.180202   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:17.180227   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:17.184052   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:17.227381   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:17.227411   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:17.233773   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.293762   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:17.293788   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:17.328911   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.328934   46412 pod_ready.go:81] duration metric: took 159.520585ms waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.328942   46412 pod_ready.go:38] duration metric: took 200.709608ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.328958   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:17.329008   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:17.379085   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:18.947663   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.822786746s)
	I0914 22:52:18.947705   46412 start.go:917] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:19.171809   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937996858s)
	I0914 22:52:19.171861   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171872   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.98779094s)
	I0914 22:52:19.171908   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171927   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171878   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171875   46412 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.842825442s)
	I0914 22:52:19.172234   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172277   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172292   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172289   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172307   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172322   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172352   46412 api_server.go:72] duration metric: took 2.181532709s to wait for apiserver process to appear ...
	I0914 22:52:19.172322   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172369   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.172377   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172387   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172396   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172410   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:52:19.172625   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172643   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172657   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172667   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172688   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172715   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172723   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172955   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172969   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.173012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.205041   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:52:19.209533   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:19.209561   46412 api_server.go:131] duration metric: took 37.185195ms to wait for apiserver health ...
	I0914 22:52:19.209573   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:19.225866   46412 system_pods.go:59] 7 kube-system pods found
	I0914 22:52:19.225893   46412 system_pods.go:61] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.225900   46412 system_pods.go:61] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.225908   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.225915   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.225921   46412 system_pods.go:61] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.225928   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.225934   46412 system_pods.go:61] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending
	I0914 22:52:19.225947   46412 system_pods.go:74] duration metric: took 16.366454ms to wait for pod list to return data ...
	I0914 22:52:19.225958   46412 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:19.232176   46412 default_sa.go:45] found service account: "default"
	I0914 22:52:19.232202   46412 default_sa.go:55] duration metric: took 6.234795ms for default service account to be created ...
	I0914 22:52:19.232221   46412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:19.238383   46412 system_pods.go:86] 7 kube-system pods found
	I0914 22:52:19.238415   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.238426   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.238433   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.238442   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.238448   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.238454   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.238463   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.238486   46412 retry.go:31] will retry after 271.864835ms: missing components: kube-dns
	I0914 22:52:19.431792   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.052667289s)
	I0914 22:52:19.431858   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.431875   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432217   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432254   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432265   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432277   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.432291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432561   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432615   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432626   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432637   46412 addons.go:467] Verifying addon metrics-server=true in "embed-certs-588699"
	I0914 22:52:19.434406   46412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:15.499654   45407 pod_ready.go:81] duration metric: took 4m0.00095032s waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:15.499683   45407 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:15.499692   45407 pod_ready.go:38] duration metric: took 4m4.80145633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:15.499709   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:15.499741   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:15.499821   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:15.551531   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:15.551573   45407 cri.go:89] found id: ""
	I0914 22:52:15.551584   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:15.551638   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.555602   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:15.555649   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:15.583476   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:15.583497   45407 cri.go:89] found id: ""
	I0914 22:52:15.583504   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:15.583541   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.587434   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:15.587499   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:15.614791   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:15.614813   45407 cri.go:89] found id: ""
	I0914 22:52:15.614821   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:15.614865   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.618758   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:15.618813   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:15.651772   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:15.651798   45407 cri.go:89] found id: ""
	I0914 22:52:15.651807   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:15.651862   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.656464   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:15.656533   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:15.701258   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:15.701289   45407 cri.go:89] found id: ""
	I0914 22:52:15.701299   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:15.701359   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.705980   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:15.706049   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:15.741616   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:15.741640   45407 cri.go:89] found id: ""
	I0914 22:52:15.741647   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:15.741702   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.745863   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:15.745913   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:15.779362   45407 cri.go:89] found id: ""
	I0914 22:52:15.779385   45407 logs.go:284] 0 containers: []
	W0914 22:52:15.779395   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:15.779403   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:15.779462   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:15.815662   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:15.815691   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.815698   45407 cri.go:89] found id: ""
	I0914 22:52:15.815707   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:15.815781   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.820879   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.826312   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:15.826338   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.864143   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:15.864175   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:16.401646   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:16.401689   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:16.442964   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:16.443000   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:16.612411   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:16.612444   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:16.664620   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:16.664652   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:16.702405   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:16.702432   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:16.738583   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:16.738615   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:16.752752   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:16.752788   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:16.793883   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:16.793924   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:16.825504   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:16.825531   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:16.879008   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:16.879046   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:16.910902   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:16.910941   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
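	For manual triage of a node in this state, the diagnostics the harness gathers above can be reproduced directly over SSH; a minimal sketch using the same commands the log shows being run (the container ID is a placeholder for whatever crictl reports):
	  # list containers for a component, e.g. the apiserver
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # tail the last 400 lines of one container's logs by ID
	  sudo crictl logs --tail 400 <container-id>
	  # runtime and kubelet unit logs, plus kernel warnings/errors
	  sudo journalctl -u crio -n 400
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400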
	I0914 22:52:19.477726   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:19.494214   45407 api_server.go:72] duration metric: took 4m15.689238s to wait for apiserver process to appear ...
	I0914 22:52:19.494240   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.494281   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:19.494341   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:19.534990   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:19.535014   45407 cri.go:89] found id: ""
	I0914 22:52:19.535023   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:19.535081   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.540782   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:19.540850   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:19.570364   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:19.570390   45407 cri.go:89] found id: ""
	I0914 22:52:19.570399   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:19.570465   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.575964   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:19.576027   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:19.608023   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:19.608047   45407 cri.go:89] found id: ""
	I0914 22:52:19.608056   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:19.608098   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.612290   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:19.612343   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:19.644658   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:19.644682   45407 cri.go:89] found id: ""
	I0914 22:52:19.644692   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:19.644743   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.651016   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:19.651092   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:19.693035   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:19.693059   45407 cri.go:89] found id: ""
	I0914 22:52:19.693068   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:19.693122   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.697798   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:19.697864   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:19.733805   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.733828   45407 cri.go:89] found id: ""
	I0914 22:52:19.733837   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:19.733890   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.737902   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:19.737976   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:19.765139   45407 cri.go:89] found id: ""
	I0914 22:52:19.765169   45407 logs.go:284] 0 containers: []
	W0914 22:52:19.765180   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:19.765188   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:19.765248   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:19.793734   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.793756   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:19.793761   45407 cri.go:89] found id: ""
	I0914 22:52:19.793767   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:19.793807   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.797559   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.801472   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:19.801492   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:19.937110   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:19.937138   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.987564   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:19.987599   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.436138   46412 addons.go:502] enable addons completed in 2.506819532s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:19.523044   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.523077   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.523089   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.523096   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.523103   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.523109   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.523115   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.523124   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.523137   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.523164   46412 retry.go:31] will retry after 369.359833ms: missing components: kube-dns
	I0914 22:52:19.900488   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.900529   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.900541   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.900550   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.900558   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.900564   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.900571   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.900587   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.900608   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.900630   46412 retry.go:31] will retry after 329.450987ms: missing components: kube-dns
	I0914 22:52:20.245124   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.245152   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.245160   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.245166   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.245171   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.245177   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.245185   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.245194   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.245204   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.245225   46412 retry.go:31] will retry after 392.738624ms: missing components: kube-dns
	I0914 22:52:20.645671   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.645706   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.645716   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.645725   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.645737   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.645747   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.645756   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.645770   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.645783   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.645803   46412 retry.go:31] will retry after 463.608084ms: missing components: kube-dns
	I0914 22:52:21.118889   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:21.118920   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Running
	I0914 22:52:21.118926   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:21.118931   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:21.118937   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:21.118941   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:21.118946   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:21.118954   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:21.118963   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:21.118971   46412 system_pods.go:126] duration metric: took 1.886741356s to wait for k8s-apps to be running ...
	I0914 22:52:21.118984   46412 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:21.119025   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:21.134331   46412 system_svc.go:56] duration metric: took 15.34035ms WaitForService to wait for kubelet.
	I0914 22:52:21.134358   46412 kubeadm.go:581] duration metric: took 4.143541631s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:21.134381   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:21.137182   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:21.137207   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:21.137230   46412 node_conditions.go:105] duration metric: took 2.834168ms to run NodePressure ...
	I0914 22:52:21.137243   46412 start.go:228] waiting for startup goroutines ...
	I0914 22:52:21.137252   46412 start.go:233] waiting for cluster config update ...
	I0914 22:52:21.137272   46412 start.go:242] writing updated cluster config ...
	I0914 22:52:21.137621   46412 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:21.184252   46412 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:21.186251   46412 out.go:177] * Done! kubectl is now configured to use "embed-certs-588699" cluster and "default" namespace by default
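	With that profile reported ready, a quick sanity check is possible from the host; a minimal sketch, assuming minikube's usual convention that the kubectl context name matches the profile name:
	  kubectl --context embed-certs-588699 get nodes
	  kubectl --context embed-certs-588699 get pods -A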
	I0914 22:52:20.022483   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:20.022512   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:20.062375   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:20.062403   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:20.099744   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:20.099776   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:20.129490   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:20.129515   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:20.165896   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:20.165922   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:20.692724   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:20.692758   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:20.761038   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:20.761086   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:20.777087   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:20.777114   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:20.808980   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:20.809020   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:20.845904   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:20.845942   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.393816   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:52:23.399946   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:52:23.401251   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:23.401271   45407 api_server.go:131] duration metric: took 3.907024801s to wait for apiserver health ...
	I0914 22:52:23.401279   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:23.401303   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:23.401346   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:23.433871   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.433895   45407 cri.go:89] found id: ""
	I0914 22:52:23.433905   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:23.433962   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.438254   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:23.438317   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:23.468532   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:23.468555   45407 cri.go:89] found id: ""
	I0914 22:52:23.468564   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:23.468626   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.473599   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:23.473658   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:23.509951   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:23.509976   45407 cri.go:89] found id: ""
	I0914 22:52:23.509986   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:23.510041   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.516637   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:23.516722   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:23.549562   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.549587   45407 cri.go:89] found id: ""
	I0914 22:52:23.549596   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:23.549653   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.553563   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:23.553626   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:23.584728   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:23.584749   45407 cri.go:89] found id: ""
	I0914 22:52:23.584756   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:23.584797   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.588600   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:23.588653   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:23.616590   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.616609   45407 cri.go:89] found id: ""
	I0914 22:52:23.616617   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:23.616669   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.620730   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:23.620782   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:23.648741   45407 cri.go:89] found id: ""
	I0914 22:52:23.648765   45407 logs.go:284] 0 containers: []
	W0914 22:52:23.648773   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:23.648781   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:23.648831   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:23.680814   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:23.680839   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:23.680846   45407 cri.go:89] found id: ""
	I0914 22:52:23.680854   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:23.680914   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.685954   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.690428   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:23.690459   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:23.818421   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:23.818456   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.867863   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:23.867894   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.903362   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:23.903393   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:23.943793   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:23.943820   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:24.538337   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:24.538390   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:24.585031   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:24.585072   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:24.639086   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:24.639120   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:24.650905   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:24.650925   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:24.698547   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:24.698590   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:24.745590   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:24.745619   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:24.777667   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:24.777697   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:24.811536   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:24.811565   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:25.132299   46713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (12.552094274s)
	I0914 22:52:25.132371   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:25.146754   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:52:25.155324   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:52:25.164387   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:52:25.164429   46713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0914 22:52:25.227970   46713 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0914 22:52:25.228029   46713 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:52:25.376482   46713 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:52:25.376603   46713 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:52:25.376721   46713 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:52:25.536163   46713 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:52:25.536339   46713 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:52:25.543555   46713 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0914 22:52:25.663579   46713 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:52:25.665315   46713 out.go:204]   - Generating certificates and keys ...
	I0914 22:52:25.665428   46713 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:52:25.665514   46713 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:52:25.665610   46713 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:52:25.665688   46713 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:52:25.665777   46713 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:52:25.665844   46713 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:52:25.665925   46713 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:52:25.666002   46713 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:52:25.666095   46713 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:52:25.666223   46713 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:52:25.666277   46713 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:52:25.666352   46713 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:52:25.931689   46713 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:52:26.088693   46713 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:52:26.251867   46713 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:52:26.566157   46713 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:52:26.567520   46713 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:52:27.360740   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:52:27.360780   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.360788   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.360795   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.360802   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.360809   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.360816   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.360827   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.360841   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.360848   45407 system_pods.go:74] duration metric: took 3.959563404s to wait for pod list to return data ...
	I0914 22:52:27.360859   45407 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:27.363690   45407 default_sa.go:45] found service account: "default"
	I0914 22:52:27.363715   45407 default_sa.go:55] duration metric: took 2.849311ms for default service account to be created ...
	I0914 22:52:27.363724   45407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:27.372219   45407 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:27.372520   45407 system_pods.go:89] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.372552   45407 system_pods.go:89] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.372571   45407 system_pods.go:89] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.372590   45407 system_pods.go:89] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.372602   45407 system_pods.go:89] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.372616   45407 system_pods.go:89] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.372744   45407 system_pods.go:89] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.372835   45407 system_pods.go:89] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.372845   45407 system_pods.go:126] duration metric: took 9.100505ms to wait for k8s-apps to be running ...
	I0914 22:52:27.372854   45407 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:27.373084   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:27.390112   45407 system_svc.go:56] duration metric: took 17.249761ms WaitForService to wait for kubelet.
	I0914 22:52:27.390137   45407 kubeadm.go:581] duration metric: took 4m23.585167656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:27.390174   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:27.393099   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:27.393123   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:27.393133   45407 node_conditions.go:105] duration metric: took 2.953927ms to run NodePressure ...
	I0914 22:52:27.393142   45407 start.go:228] waiting for startup goroutines ...
	I0914 22:52:27.393148   45407 start.go:233] waiting for cluster config update ...
	I0914 22:52:27.393156   45407 start.go:242] writing updated cluster config ...
	I0914 22:52:27.393379   45407 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:27.441228   45407 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:27.442889   45407 out.go:177] * Done! kubectl is now configured to use "no-preload-344363" cluster and "default" namespace by default
	I0914 22:52:26.569354   46713 out.go:204]   - Booting up control plane ...
	I0914 22:52:26.569484   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:52:26.582407   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:52:26.589858   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:52:26.591607   46713 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:52:26.596764   46713 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:52:37.101083   46713 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503887 seconds
	I0914 22:52:37.101244   46713 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:37.116094   46713 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:37.633994   46713 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:37.634186   46713 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-930717 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 22:52:38.144071   46713 kubeadm.go:322] [bootstrap-token] Using token: jnf2g9.h0rslaob8wj902ym
	I0914 22:52:38.145543   46713 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:38.145661   46713 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:38.153514   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:38.159575   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:38.164167   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:38.167903   46713 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:38.241317   46713 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:38.572283   46713 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:38.572309   46713 kubeadm.go:322] 
	I0914 22:52:38.572399   46713 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:38.572410   46713 kubeadm.go:322] 
	I0914 22:52:38.572526   46713 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:38.572547   46713 kubeadm.go:322] 
	I0914 22:52:38.572581   46713 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:38.572669   46713 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:38.572762   46713 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:38.572775   46713 kubeadm.go:322] 
	I0914 22:52:38.572836   46713 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:38.572926   46713 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:38.573012   46713 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:38.573020   46713 kubeadm.go:322] 
	I0914 22:52:38.573089   46713 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0914 22:52:38.573152   46713 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:38.573159   46713 kubeadm.go:322] 
	I0914 22:52:38.573222   46713 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573313   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:38.573336   46713 kubeadm.go:322]     --control-plane 	  
	I0914 22:52:38.573343   46713 kubeadm.go:322] 
	I0914 22:52:38.573406   46713 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:38.573414   46713 kubeadm.go:322] 
	I0914 22:52:38.573527   46713 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573687   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:38.574219   46713 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:38.574248   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:52:38.574261   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:38.575900   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:38.577300   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:38.587120   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:52:38.610197   46713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:38.610265   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.610267   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=old-k8s-version-930717 minikube.k8s.io/updated_at=2023_09_14T22_52_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.858082   46713 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:38.858297   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.960045   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:39.549581   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.049788   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.549998   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.049043   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.549875   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.049596   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.549039   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.049563   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.549663   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.049534   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.549938   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.049227   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.549171   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.049628   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.550019   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.049857   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.549272   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.049648   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.549709   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.049770   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.550050   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.048948   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.549154   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.049695   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.549811   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.049813   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.549858   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.049505   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.149056   46713 kubeadm.go:1081] duration metric: took 14.538858246s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:53.149093   46713 kubeadm.go:406] StartCluster complete in 5m40.2118148s
	I0914 22:52:53.149114   46713 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.149200   46713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:53.150928   46713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.151157   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:53.151287   46713 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:53.151382   46713 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151391   46713 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151405   46713 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-930717"
	I0914 22:52:53.151411   46713 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-930717"
	W0914 22:52:53.151413   46713 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:53.151419   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:52:53.151423   46713 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-930717"
	W0914 22:52:53.151433   46713 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:53.151479   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151412   46713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-930717"
	I0914 22:52:53.151484   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151796   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151820   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151958   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.152044   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.170764   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0914 22:52:53.170912   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0914 22:52:53.171012   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0914 22:52:53.171235   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171345   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171378   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171850   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171870   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171970   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171991   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171999   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.172019   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.172232   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172517   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172572   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172759   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.172910   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.172987   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.173110   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.173146   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.189453   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0914 22:52:53.189789   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.190229   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.190251   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.190646   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.190822   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.192990   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.195176   46713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:53.194738   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0914 22:52:53.196779   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:53.196797   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:53.196813   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.195752   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.197457   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.197476   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.197849   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.198026   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.200022   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.200176   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.201917   46713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:53.200654   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.200795   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.203540   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.203632   46713 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.203652   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.203844   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.204002   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.206460   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.206968   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.206998   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.207153   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.207303   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.207524   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.207672   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.253944   46713 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-930717"
	W0914 22:52:53.253968   46713 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:53.253990   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.254330   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.254377   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0914 22:52:53.270047   46713 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-930717" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0914 22:52:53.270077   46713 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0914 22:52:53.270099   46713 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:53.271730   46713 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:53.270422   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0914 22:52:53.273255   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:53.273653   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.274180   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.274206   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.274559   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.275121   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.275165   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.291000   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0914 22:52:53.291405   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.291906   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.291927   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.292312   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.292529   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.294366   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.294583   46713 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.294598   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:53.294611   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.297265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.297809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297895   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.298057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.298236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.298383   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.344235   46713 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.344478   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:53.350176   46713 node_ready.go:49] node "old-k8s-version-930717" has status "Ready":"True"
	I0914 22:52:53.350196   46713 node_ready.go:38] duration metric: took 5.934445ms waiting for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.350204   46713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:53.359263   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:53.359296   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:53.367792   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:53.384576   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.397687   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:53.397703   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:53.439813   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:53.439843   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:53.473431   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.499877   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:54.233171   46713 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:54.365130   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365156   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365178   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365198   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365438   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365465   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365476   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365481   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.365486   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365546   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365556   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365565   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365574   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367064   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367090   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367068   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367489   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367513   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367526   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.367540   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367489   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367757   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367810   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367852   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.830646   46713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330728839s)
	I0914 22:52:54.830698   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.830711   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831036   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831059   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831065   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.831080   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.831096   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831312   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831328   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831338   46713 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-930717"
	I0914 22:52:54.832992   46713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:54.834828   46713 addons.go:502] enable addons completed in 1.683549699s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:55.415046   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:57.878279   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:59.879299   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:01.879559   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:03.880088   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:05.880334   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.880355   46713 pod_ready.go:81] duration metric: took 12.512536425s waiting for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.880364   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885370   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.885386   46713 pod_ready.go:81] duration metric: took 5.016722ms waiting for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885394   46713 pod_ready.go:38] duration metric: took 12.535181673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:53:05.885413   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:53:05.885466   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:53:05.901504   46713 api_server.go:72] duration metric: took 12.631380008s to wait for apiserver process to appear ...
	I0914 22:53:05.901522   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:53:05.901534   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:53:05.907706   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:53:05.908445   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:53:05.908466   46713 api_server.go:131] duration metric: took 6.937898ms to wait for apiserver health ...
	I0914 22:53:05.908475   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:53:05.911983   46713 system_pods.go:59] 5 kube-system pods found
	I0914 22:53:05.912001   46713 system_pods.go:61] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.912008   46713 system_pods.go:61] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.912013   46713 system_pods.go:61] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.912022   46713 system_pods.go:61] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.912033   46713 system_pods.go:61] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.912043   46713 system_pods.go:74] duration metric: took 3.562804ms to wait for pod list to return data ...
	I0914 22:53:05.912054   46713 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:53:05.914248   46713 default_sa.go:45] found service account: "default"
	I0914 22:53:05.914267   46713 default_sa.go:55] duration metric: took 2.203622ms for default service account to be created ...
	I0914 22:53:05.914276   46713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:53:05.917292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:05.917310   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.917315   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.917319   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.917325   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.917331   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.917343   46713 retry.go:31] will retry after 277.910308ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.201147   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.201170   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.201175   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.201179   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.201185   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.201191   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.201205   46713 retry.go:31] will retry after 262.96693ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.470372   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.470410   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.470418   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.470425   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.470435   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.470446   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.470481   46713 retry.go:31] will retry after 486.428451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.961666   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.961693   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.961700   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.961706   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.961716   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.961724   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.961740   46713 retry.go:31] will retry after 524.467148ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:07.491292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:07.491315   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:07.491321   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:07.491325   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:07.491331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:07.491337   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:07.491370   46713 retry.go:31] will retry after 567.308028ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.063587   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.063612   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.063618   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.063622   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.063629   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.063635   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.063649   46713 retry.go:31] will retry after 723.150919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.791530   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.791561   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.791571   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.791578   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.791588   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.791597   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.791616   46713 retry.go:31] will retry after 1.173741151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:09.971866   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:09.971895   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:09.971903   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:09.971909   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:09.971919   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:09.971928   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:09.971946   46713 retry.go:31] will retry after 1.046713916s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:11.024191   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:11.024220   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:11.024226   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:11.024231   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:11.024238   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:11.024244   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:11.024260   46713 retry.go:31] will retry after 1.531910243s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:12.562517   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:12.562555   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:12.562564   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:12.562573   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:12.562584   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:12.562594   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:12.562612   46713 retry.go:31] will retry after 2.000243773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:14.570247   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:14.570284   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:14.570294   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:14.570303   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:14.570320   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:14.570329   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:14.570346   46713 retry.go:31] will retry after 2.095330784s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:16.670345   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:16.670372   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:16.670377   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:16.670382   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:16.670394   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:16.670401   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:16.670416   46713 retry.go:31] will retry after 2.811644755s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:19.488311   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:19.488339   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:19.488344   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:19.488348   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:19.488354   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:19.488362   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:19.488380   46713 retry.go:31] will retry after 3.274452692s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:22.768417   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:22.768446   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:22.768454   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:22.768461   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:22.768471   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:22.768481   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:22.768499   46713 retry.go:31] will retry after 5.52037196s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:28.294932   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:28.294958   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:28.294964   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:28.294967   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:28.294975   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:28.294980   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:28.294994   46713 retry.go:31] will retry after 4.305647383s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:32.605867   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:32.605894   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:32.605900   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:32.605903   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:32.605910   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:32.605915   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:32.605929   46713 retry.go:31] will retry after 8.214918081s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:40.825284   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:40.825314   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:40.825319   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:40.825324   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:40.825331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:40.825336   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:40.825352   46713 retry.go:31] will retry after 10.5220598s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:51.353809   46713 system_pods.go:86] 7 kube-system pods found
	I0914 22:53:51.353844   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:51.353851   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:51.353856   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Pending
	I0914 22:53:51.353862   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:51.353868   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Pending
	I0914 22:53:51.353878   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:51.353887   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:51.353907   46713 retry.go:31] will retry after 10.482387504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:54:01.842876   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:01.842900   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:01.842905   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:01.842909   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Pending
	I0914 22:54:01.842914   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:01.842918   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Pending
	I0914 22:54:01.842921   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:01.842925   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:01.842931   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:01.842937   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:01.842950   46713 retry.go:31] will retry after 14.535469931s: missing components: etcd, kube-controller-manager
	I0914 22:54:16.384703   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:16.384732   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:16.384738   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:16.384742   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Running
	I0914 22:54:16.384747   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:16.384751   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Running
	I0914 22:54:16.384754   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:16.384758   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:16.384766   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:16.384773   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:16.384782   46713 system_pods.go:126] duration metric: took 1m10.470499333s to wait for k8s-apps to be running ...
	I0914 22:54:16.384791   46713 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:54:16.384849   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:16.409329   46713 system_svc.go:56] duration metric: took 24.530447ms WaitForService to wait for kubelet.
	I0914 22:54:16.409359   46713 kubeadm.go:581] duration metric: took 1m23.139238057s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:54:16.409385   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:54:16.412461   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:54:16.412490   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:54:16.412505   46713 node_conditions.go:105] duration metric: took 3.107771ms to run NodePressure ...
	I0914 22:54:16.412519   46713 start.go:228] waiting for startup goroutines ...
	I0914 22:54:16.412529   46713 start.go:233] waiting for cluster config update ...
	I0914 22:54:16.412546   46713 start.go:242] writing updated cluster config ...
	I0914 22:54:16.412870   46713 ssh_runner.go:195] Run: rm -f paused
	I0914 22:54:16.460181   46713 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0914 22:54:16.461844   46713 out.go:177] 
	W0914 22:54:16.463221   46713 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0914 22:54:16.464486   46713 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0914 22:54:16.465912   46713 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-930717" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:47:15 UTC, ends at Thu 2023-09-14 23:01:29 UTC. --
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.719437184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab9
4b848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cc085eea-7316-4ff8-b00f-20d50ece5287 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.893691714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=edcb32f3-1f0d-455e-b14f-3bf2234828f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.893779663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=edcb32f3-1f0d-455e-b14f-3bf2234828f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.894014530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=edcb32f3-1f0d-455e-b14f-3bf2234828f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.926774860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e2f08ef-49d2-44d3-835b-36cb93cba1b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.926862537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e2f08ef-49d2-44d3-835b-36cb93cba1b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.927126854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e2f08ef-49d2-44d3-835b-36cb93cba1b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
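	[editor's note] The crio entries above record the CRI ListContainers RPC being polled against CRI-O on no-preload-344363: each cycle is a Request with an empty ContainerFilter, the "No filters were applied, returning full container list" debug line, and a Response carrying the full container list. As a minimal sketch of issuing the same RPC by hand (assumptions: CRI-O's default socket at /var/run/crio/crio.sock, the k8s.io/cri-api and google.golang.org/grpc modules; this program is illustrative and not part of the test suite):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial CRI-O's unix socket; the unix:// scheme is resolved by grpc-go itself.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty filter is the "No filters were applied" case in the crio log:
		// the full container list comes back.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.12s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

	Inside the node (e.g. via minikube ssh), sudo crictl ps -a should yield an equivalent listing, since crictl drives the same ListContainers RPC.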
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.959601933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dff9155d-b542-42a3-bacb-f1d4baf6da78 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.959697132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dff9155d-b542-42a3-bacb-f1d4baf6da78 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.959978885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dff9155d-b542-42a3-bacb-f1d4baf6da78 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.994500186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f2f404f1-4c99-452b-81bb-7faddd4ea43a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.994563491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f2f404f1-4c99-452b-81bb-7faddd4ea43a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:28 no-preload-344363 crio[722]: time="2023-09-14 23:01:28.994761998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f2f404f1-4c99-452b-81bb-7faddd4ea43a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.040297281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e7e0cd78-156d-48dd-856f-7354979dcad6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.040391859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e7e0cd78-156d-48dd-856f-7354979dcad6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.040630633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7e0cd78-156d-48dd-856f-7354979dcad6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.072573584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5666a176-ac9b-4e0c-aac7-f1b6a6f34b3f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.072639717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5666a176-ac9b-4e0c-aac7-f1b6a6f34b3f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.072927948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5666a176-ac9b-4e0c-aac7-f1b6a6f34b3f name=/runtime.v1alpha2.RuntimeService/ListContainers
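	[editor's note] Every Request line in this dump carries an empty ContainerFilter. For contrast, a filtered variant of the same call would look roughly like the hypothetical helper below (it reuses the client type and imports from the earlier sketch; listRunningInSandbox and sandboxID are illustrative names, not from the test suite):

	// A populated filter: only running containers inside one pod sandbox.
	func listRunningInSandbox(ctx context.Context, client runtimeapi.RuntimeServiceClient, sandboxID string) ([]*runtimeapi.Container, error) {
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				PodSandboxId: sandboxID,
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			return nil, err
		}
		return resp.Containers, nil
	}

	With a populated filter, CRI-O should narrow the response server-side rather than logging "No filters were applied" and returning the full list as it does above.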
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.106455218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a27fd02f-1482-4748-a166-4ba25dd04562 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.106515092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a27fd02f-1482-4748-a166-4ba25dd04562 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.106709487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a27fd02f-1482-4748-a166-4ba25dd04562 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.139542391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bb6ab755-5138-416a-b3ea-bdc40b38830b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.139630450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bb6ab755-5138-416a-b3ea-bdc40b38830b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:01:29 no-preload-344363 crio[722]: time="2023-09-14 23:01:29.139911831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bb6ab755-5138-416a-b3ea-bdc40b38830b name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	0d6da8266a65b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   48e581734bb71
	97022350cc3ee       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   6f3da613ffbe9
	8a06ddba66f0a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   dc7ce60e4ea6b
	eb1a03278a771       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      13 minutes ago      Running             kube-proxy                1                   194b6c7a64b01
	a554481de89e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   48e581734bb71
	d670d4deec4bc       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      13 minutes ago      Running             kube-controller-manager   1                   2314bbd92316d
	6fa0d09d74d54       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      13 minutes ago      Running             kube-scheduler            1                   8bc4a7d7f02be
	db7177e981567       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   3a1835e739744
	33222eae96b0a       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      13 minutes ago      Running             kube-apiserver            1                   b588cc7554b07
	
	* 
	* ==> coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56337 - 12331 "HINFO IN 315502276035198041.3823794961810864963. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015342469s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-344363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-344363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=no-preload-344363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_38_24_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:38:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-344363
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 23:01:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:58:44 +0000   Thu, 14 Sep 2023 22:38:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:58:44 +0000   Thu, 14 Sep 2023 22:38:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:58:44 +0000   Thu, 14 Sep 2023 22:38:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:58:44 +0000   Thu, 14 Sep 2023 22:48:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    no-preload-344363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8881348dd73843818e568e820cb8ced5
	  System UUID:                8881348d-d738-4381-8e56-8e820cb8ced5
	  Boot ID:                    3315b2a3-ec47-4527-946a-63c262d71b01
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-rntdg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-344363                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-344363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-344363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-zzkbp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-344363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-swnnf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-344363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-344363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-344363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node no-preload-344363 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-344363 event: Registered Node no-preload-344363 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-344363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-344363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-344363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-344363 event: Registered Node no-preload-344363 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep14 22:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.079398] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.601158] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.857565] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135211] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.461894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.344272] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.116087] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.145073] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.117210] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.241417] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[ +31.042084] systemd-fstab-generator[1226]: Ignoring "noauto" for root device
	[Sep14 22:48] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] <==
	* {"level":"info","ts":"2023-09-14T22:47:57.5238Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:47:57.524152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a switched to configuration voters=(1901133809061542250)"}
	{"level":"info","ts":"2023-09-14T22:47:57.524294Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","added-peer-id":"1a622f206f99396a","added-peer-peer-urls":["https://192.168.39.60:2380"]}
	{"level":"info","ts":"2023-09-14T22:47:57.524388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:47:57.524433Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:47:57.524219Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:47:58.685286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T22:47:58.685427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:47:58.685486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgPreVoteResp from 1a622f206f99396a at term 2"}
	{"level":"info","ts":"2023-09-14T22:47:58.685526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:47:58.685558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgVoteResp from 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2023-09-14T22:47:58.685588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became leader at term 3"}
	{"level":"info","ts":"2023-09-14T22:47:58.685616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a622f206f99396a elected leader 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2023-09-14T22:47:58.694429Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1a622f206f99396a","local-member-attributes":"{Name:no-preload-344363 ClientURLs:[https://192.168.39.60:2379]}","request-path":"/0/members/1a622f206f99396a/attributes","cluster-id":"94dd135126e1e7b0","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:47:58.695307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:47:58.696395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:47:58.696552Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:47:58.713495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.60:2379"}
	{"level":"info","ts":"2023-09-14T22:47:58.722612Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:47:58.722654Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-09-14T22:48:00.574704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.196116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/no-preload-344363\" ","response":"range_response_count:1 size:691"}
	{"level":"info","ts":"2023-09-14T22:48:00.574891Z","caller":"traceutil/trace.go:171","msg":"trace[462883843] range","detail":"{range_begin:/registry/csinodes/no-preload-344363; range_end:; response_count:1; response_revision:420; }","duration":"100.382326ms","start":"2023-09-14T22:48:00.474484Z","end":"2023-09-14T22:48:00.574866Z","steps":["trace[462883843] 'agreement among raft nodes before linearized reading'  (duration: 98.904758ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:57:58.811718Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":771}
	{"level":"info","ts":"2023-09-14T22:57:58.814784Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":771,"took":"2.694983ms","hash":1002913574}
	{"level":"info","ts":"2023-09-14T22:57:58.814848Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1002913574,"revision":771,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  23:01:29 up 14 min,  0 users,  load average: 0.14, 0.15, 0.12
	Linux no-preload-344363 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:58:01.472819       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 22:58:01.472947       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:58:01.473043       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 22:58:01.474295       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 22:59:00.377827       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.218.65:443: connect: connection refused
	I0914 22:59:00.377897       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 22:59:01.473932       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:59:01.474078       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:59:01.474106       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 22:59:01.475293       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:59:01.475358       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 22:59:01.475383       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:00:00.377623       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.218.65:443: connect: connection refused
	I0914 23:00:00.377859       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 23:01:00.377699       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.218.65:443: connect: connection refused
	I0914 23:01:00.377934       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:01:01.474742       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:01:01.474883       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:01:01.474897       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:01:01.476248       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:01:01.476352       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 23:01:01.476382       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] <==
	* I0914 22:55:43.514264       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:56:13.050074       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:56:13.526016       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:56:43.056504       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:56:43.534481       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:57:13.063251       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:57:13.543685       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:57:43.069797       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:57:43.551837       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:58:13.075716       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:58:13.565925       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:58:43.080805       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:58:43.574622       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 22:59:13.086654       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:59:13.583905       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 22:59:13.737539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="395.164µs"
	I0914 22:59:26.734107       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="173.252µs"
	E0914 22:59:43.092308       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 22:59:43.592043       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:00:13.097875       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:00:13.600551       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:00:43.103877       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:00:43.609834       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:01:13.109537       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:01:13.618418       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] <==
	* I0914 22:48:01.863468       1 server_others.go:69] "Using iptables proxy"
	I0914 22:48:01.886884       1 node.go:141] Successfully retrieved node IP: 192.168.39.60
	I0914 22:48:01.931005       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:48:01.931061       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:48:01.933747       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:48:01.934509       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:48:01.934914       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:48:01.934967       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:48:01.937360       1 config.go:188] "Starting service config controller"
	I0914 22:48:01.937967       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:48:01.938037       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:48:01.938065       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:48:01.940521       1 config.go:315] "Starting node config controller"
	I0914 22:48:01.940573       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:48:02.038468       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:48:02.038489       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:48:02.040693       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] <==
	* I0914 22:47:58.555719       1 serving.go:348] Generated self-signed cert in-memory
	I0914 22:48:00.499895       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:48:00.500014       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:48:00.582485       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0914 22:48:00.582571       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0914 22:48:00.582796       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:48:00.582896       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:48:00.582949       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0914 22:48:00.582981       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0914 22:48:00.585567       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:48:00.585698       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:48:00.684423       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0914 22:48:00.684604       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:48:00.689288       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:47:15 UTC, ends at Thu 2023-09-14 23:01:29 UTC. --
	Sep 14 22:58:54 no-preload-344363 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:58:54 no-preload-344363 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:58:54 no-preload-344363 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 22:58:58 no-preload-344363 kubelet[1232]: E0914 22:58:58.729493    1232 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 14 22:58:58 no-preload-344363 kubelet[1232]: E0914 22:58:58.729552    1232 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 14 22:58:58 no-preload-344363 kubelet[1232]: E0914 22:58:58.729812    1232 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pc7k5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-swnnf_kube-system(4b0db27e-c36f-452e-8ed5-57027bf9ab99): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 22:58:58 no-preload-344363 kubelet[1232]: E0914 22:58:58.729863    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 22:59:13 no-preload-344363 kubelet[1232]: E0914 22:59:13.716316    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 22:59:26 no-preload-344363 kubelet[1232]: E0914 22:59:26.718071    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 22:59:40 no-preload-344363 kubelet[1232]: E0914 22:59:40.718773    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 22:59:51 no-preload-344363 kubelet[1232]: E0914 22:59:51.716907    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 22:59:54 no-preload-344363 kubelet[1232]: E0914 22:59:54.834551    1232 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 22:59:54 no-preload-344363 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 22:59:54 no-preload-344363 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 22:59:54 no-preload-344363 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:00:04 no-preload-344363 kubelet[1232]: E0914 23:00:04.717108    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:00:19 no-preload-344363 kubelet[1232]: E0914 23:00:19.717213    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:00:34 no-preload-344363 kubelet[1232]: E0914 23:00:34.717125    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:00:49 no-preload-344363 kubelet[1232]: E0914 23:00:49.717519    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:00:54 no-preload-344363 kubelet[1232]: E0914 23:00:54.833344    1232 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:00:54 no-preload-344363 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:00:54 no-preload-344363 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:00:54 no-preload-344363 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:01:01 no-preload-344363 kubelet[1232]: E0914 23:01:01.716722    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:01:16 no-preload-344363 kubelet[1232]: E0914 23:01:16.717773    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	
	* 
	* ==> storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] <==
	* I0914 22:48:32.007675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:48:32.019963       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:48:32.020076       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:48:49.422811       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:48:49.423275       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-344363_8d3ecd0d-6913-482e-9050-e4f8e3b81f4a!
	I0914 22:48:49.423376       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d18b0ddf-5cd9-4d5d-8650-5ce9016e413a", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-344363_8d3ecd0d-6913-482e-9050-e4f8e3b81f4a became leader
	I0914 22:48:49.524535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-344363_8d3ecd0d-6913-482e-9050-e4f8e3b81f4a!
	
	* 
	* ==> storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] <==
	* I0914 22:48:01.631789       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 22:48:31.634337       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-344363 -n no-preload-344363
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-344363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-swnnf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-344363 describe pod metrics-server-57f55c9bc5-swnnf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-344363 describe pod metrics-server-57f55c9bc5-swnnf: exit status 1 (62.182673ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-swnnf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-344363 describe pod metrics-server-57f55c9bc5-swnnf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 22:54:29.765343   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:55:52.812020   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:56:36.475194   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 22:58:32.188362   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 22:59:29.764525   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:59:55.238977   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-930717 -n old-k8s-version-930717
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-14 23:03:17.01640492 +0000 UTC m=+5219.210746559
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-930717 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-930717 logs -n 25: (1.544168466s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-711912                           | kubernetes-upgrade-711912    | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:36 UTC |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-344363             | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:40 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799144  | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC |                     |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-344363                  | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-588699            | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799144       | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-930717        | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:51 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-588699                 | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-930717             | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC | 14 Sep 23 22:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:45:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:45:20.513575   46713 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:45:20.513835   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.513847   46713 out.go:309] Setting ErrFile to fd 2...
	I0914 22:45:20.513852   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.514030   46713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:45:20.514571   46713 out.go:303] Setting JSON to false
	I0914 22:45:20.515550   46713 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5263,"bootTime":1694726258,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:45:20.515607   46713 start.go:138] virtualization: kvm guest
	I0914 22:45:20.517738   46713 out.go:177] * [old-k8s-version-930717] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:45:20.519301   46713 notify.go:220] Checking for updates...
	I0914 22:45:20.519309   46713 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:45:20.520886   46713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:45:20.522525   46713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:45:20.524172   46713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:45:20.525826   46713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:45:20.527204   46713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:45:20.529068   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:45:20.529489   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.529542   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.548088   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0914 22:45:20.548488   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.548969   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.548985   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.549404   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.549555   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.551507   46713 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 22:45:20.552878   46713 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:45:20.553145   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.553176   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.566825   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0914 22:45:20.567181   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.567617   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.567646   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.568018   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.568195   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.601886   46713 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:45:20.603176   46713 start.go:298] selected driver: kvm2
	I0914 22:45:20.603188   46713 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.603284   46713 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:45:20.603926   46713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.603997   46713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:45:20.617678   46713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:45:20.618009   46713 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:45:20.618045   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:45:20.618062   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:45:20.618075   46713 start_flags.go:321] config:
	{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.618204   46713 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.619892   46713 out.go:177] * Starting control plane node old-k8s-version-930717 in cluster old-k8s-version-930717
	I0914 22:45:22.939748   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:20.621146   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:45:20.621171   46713 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0914 22:45:20.621184   46713 cache.go:57] Caching tarball of preloaded images
	I0914 22:45:20.621265   46713 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:45:20.621286   46713 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0914 22:45:20.621381   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:45:20.621551   46713 start.go:365] acquiring machines lock for old-k8s-version-930717: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:45:29.019730   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:32.091705   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:38.171724   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:41.243661   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:47.323733   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:50.395751   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:56.475703   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:59.547782   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:46:02.551591   45954 start.go:369] acquired machines lock for "default-k8s-diff-port-799144" in 3m15.018428257s
	I0914 22:46:02.551631   45954 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:02.551642   45954 fix.go:54] fixHost starting: 
	I0914 22:46:02.551944   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:02.551972   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:02.566520   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0914 22:46:02.566922   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:02.567373   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:02.567392   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:02.567734   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:02.567961   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:02.568128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:02.569692   45954 fix.go:102] recreateIfNeeded on default-k8s-diff-port-799144: state=Stopped err=<nil>
	I0914 22:46:02.569714   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	W0914 22:46:02.569887   45954 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:02.571684   45954 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799144" ...
	I0914 22:46:02.549458   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:02.549490   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:46:02.551419   45407 machine.go:91] provisioned docker machine in 4m37.435317847s
	I0914 22:46:02.551457   45407 fix.go:56] fixHost completed within 4m37.455553972s
	I0914 22:46:02.551462   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 4m37.455581515s
	W0914 22:46:02.551502   45407 start.go:688] error starting host: provision: host is not running
	W0914 22:46:02.551586   45407 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0914 22:46:02.551600   45407 start.go:703] Will try again in 5 seconds ...
	I0914 22:46:02.573354   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Start
	I0914 22:46:02.573535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring networks are active...
	I0914 22:46:02.574326   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network default is active
	I0914 22:46:02.574644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network mk-default-k8s-diff-port-799144 is active
	I0914 22:46:02.575046   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Getting domain xml...
	I0914 22:46:02.575767   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Creating domain...
	I0914 22:46:03.792613   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting to get IP...
	I0914 22:46:03.793573   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.793932   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.794029   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:03.793928   46868 retry.go:31] will retry after 250.767464ms: waiting for machine to come up
	I0914 22:46:04.046447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046928   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.046853   46868 retry.go:31] will retry after 320.29371ms: waiting for machine to come up
	I0914 22:46:04.368383   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368782   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368814   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.368726   46868 retry.go:31] will retry after 295.479496ms: waiting for machine to come up
	I0914 22:46:04.666192   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666655   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666680   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.666595   46868 retry.go:31] will retry after 572.033699ms: waiting for machine to come up
	I0914 22:46:05.240496   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240920   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240953   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.240872   46868 retry.go:31] will retry after 493.557238ms: waiting for machine to come up
	I0914 22:46:05.735682   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736201   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.736150   46868 retry.go:31] will retry after 848.645524ms: waiting for machine to come up
	I0914 22:46:06.586116   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586568   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:06.586473   46868 retry.go:31] will retry after 866.110647ms: waiting for machine to come up
	I0914 22:46:07.553803   45407 start.go:365] acquiring machines lock for no-preload-344363: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:46:07.454431   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454798   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454827   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:07.454743   46868 retry.go:31] will retry after 1.485337575s: waiting for machine to come up
	I0914 22:46:08.941761   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942136   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942177   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:08.942104   46868 retry.go:31] will retry after 1.640651684s: waiting for machine to come up
	I0914 22:46:10.584576   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584939   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:10.584838   46868 retry.go:31] will retry after 1.656716681s: waiting for machine to come up
	I0914 22:46:12.243599   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244096   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:12.244037   46868 retry.go:31] will retry after 2.692733224s: waiting for machine to come up
	I0914 22:46:14.939726   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940035   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940064   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:14.939986   46868 retry.go:31] will retry after 2.745837942s: waiting for machine to come up
	I0914 22:46:22.180177   46412 start.go:369] acquired machines lock for "embed-certs-588699" in 2m3.238409394s
	I0914 22:46:22.180244   46412 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:22.180256   46412 fix.go:54] fixHost starting: 
	I0914 22:46:22.180661   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:22.180706   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:22.196558   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0914 22:46:22.196900   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:22.197304   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:46:22.197326   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:22.197618   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:22.197808   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:22.197986   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:46:22.199388   46412 fix.go:102] recreateIfNeeded on embed-certs-588699: state=Stopped err=<nil>
	I0914 22:46:22.199423   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	W0914 22:46:22.199595   46412 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:22.202757   46412 out.go:177] * Restarting existing kvm2 VM for "embed-certs-588699" ...
	I0914 22:46:17.687397   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687911   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687937   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:17.687878   46868 retry.go:31] will retry after 3.174192278s: waiting for machine to come up
	I0914 22:46:20.866173   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866687   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Found IP for machine: 192.168.50.175
	I0914 22:46:20.866722   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has current primary IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866737   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserving static IP address...
	I0914 22:46:20.867209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.867245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | skip adding static IP to network mk-default-k8s-diff-port-799144 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"}
	I0914 22:46:20.867263   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserved static IP address: 192.168.50.175
	I0914 22:46:20.867290   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for SSH to be available...
	I0914 22:46:20.867303   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Getting to WaitForSSH function...
	I0914 22:46:20.869597   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.869960   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.869993   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.870103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH client type: external
	I0914 22:46:20.870137   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa (-rw-------)
	I0914 22:46:20.870193   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:20.870218   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | About to run SSH command:
	I0914 22:46:20.870237   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | exit 0
	I0914 22:46:20.959125   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:20.959456   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetConfigRaw
	I0914 22:46:20.960082   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:20.962512   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.962889   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.962915   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.963114   45954 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/config.json ...
	I0914 22:46:20.963282   45954 machine.go:88] provisioning docker machine ...
	I0914 22:46:20.963300   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:20.963509   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963682   45954 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799144"
	I0914 22:46:20.963709   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963899   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:20.966359   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966728   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.966757   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966956   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:20.967146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967287   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967420   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:20.967584   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:20.967963   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:20.967983   45954 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799144 && echo "default-k8s-diff-port-799144" | sudo tee /etc/hostname
	I0914 22:46:21.098114   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799144
	
	I0914 22:46:21.098158   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.100804   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101167   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.101208   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.101532   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101855   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.102028   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.102386   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.102406   45954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799144/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:21.225929   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:21.225964   45954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:21.225992   45954 buildroot.go:174] setting up certificates
	I0914 22:46:21.226007   45954 provision.go:83] configureAuth start
	I0914 22:46:21.226023   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:21.226299   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:21.229126   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229514   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.229555   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.231683   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.231992   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.232027   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.232179   45954 provision.go:138] copyHostCerts
	I0914 22:46:21.232233   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:21.232247   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:21.232321   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:21.232412   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:21.232421   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:21.232446   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:21.232542   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:21.232551   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:21.232572   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:21.232617   45954 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799144 san=[192.168.50.175 192.168.50.175 localhost 127.0.0.1 minikube default-k8s-diff-port-799144]
	I0914 22:46:21.489180   45954 provision.go:172] copyRemoteCerts
	I0914 22:46:21.489234   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:21.489257   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.491989   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492308   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.492334   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.492734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.492869   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.493038   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:21.579991   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0914 22:46:21.599819   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:21.619391   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:21.638607   45954 provision.go:86] duration metric: configureAuth took 412.585328ms
	I0914 22:46:21.638629   45954 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:21.638797   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:21.638867   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.641693   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642033   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.642067   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.642399   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642562   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.642900   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.643239   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.643257   45954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:21.928913   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:21.928940   45954 machine.go:91] provisioned docker machine in 965.645328ms
	I0914 22:46:21.928952   45954 start.go:300] post-start starting for "default-k8s-diff-port-799144" (driver="kvm2")
	I0914 22:46:21.928964   45954 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:21.928987   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:21.929377   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:21.929425   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.931979   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932350   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.932388   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932475   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.932704   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.932923   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.933059   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.020329   45954 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:22.024444   45954 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:22.024458   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:22.024513   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:22.024589   45954 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:22.024672   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:22.033456   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:22.054409   45954 start.go:303] post-start completed in 125.445528ms
	I0914 22:46:22.054427   45954 fix.go:56] fixHost completed within 19.502785226s
	I0914 22:46:22.054444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.057353   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057690   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.057721   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057925   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.058139   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058304   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058483   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.058657   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:22.059051   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:22.059065   45954 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:22.180023   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731582.133636857
	
	I0914 22:46:22.180044   45954 fix.go:206] guest clock: 1694731582.133636857
	I0914 22:46:22.180054   45954 fix.go:219] Guest: 2023-09-14 22:46:22.133636857 +0000 UTC Remote: 2023-09-14 22:46:22.054430307 +0000 UTC m=+214.661061156 (delta=79.20655ms)
	I0914 22:46:22.180078   45954 fix.go:190] guest clock delta is within tolerance: 79.20655ms
	I0914 22:46:22.180084   45954 start.go:83] releasing machines lock for "default-k8s-diff-port-799144", held for 19.628473828s
	I0914 22:46:22.180114   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.180408   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:22.183182   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183507   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.183543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183675   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184175   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184384   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184494   45954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:22.184535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.184627   45954 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:22.184662   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.187447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187604   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187813   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.187839   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187971   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.187986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.188024   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.188151   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.188153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188344   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188391   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188500   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.188519   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188618   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.303009   45954 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:22.308185   45954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:22.450504   45954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:22.455642   45954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:22.455700   45954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:22.468430   45954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:22.468453   45954 start.go:469] detecting cgroup driver to use...
	I0914 22:46:22.468509   45954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:22.483524   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:22.494650   45954 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:22.494706   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:22.506589   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:22.518370   45954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:22.619545   45954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:22.737486   45954 docker.go:212] disabling docker service ...
	I0914 22:46:22.737551   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:22.749267   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:22.759866   45954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:22.868561   45954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:22.973780   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:22.986336   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:23.004987   45954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:23.005042   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.013821   45954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:23.013889   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.022487   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.030875   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.038964   45954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:23.047246   45954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:23.054339   45954 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:23.054379   45954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:23.066649   45954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:23.077024   45954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:23.174635   45954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:23.337031   45954 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:23.337113   45954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:23.342241   45954 start.go:537] Will wait 60s for crictl version
	I0914 22:46:23.342308   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:46:23.345832   45954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:23.377347   45954 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:23.377433   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.425559   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.492770   45954 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:22.203936   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Start
	I0914 22:46:22.204098   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring networks are active...
	I0914 22:46:22.204740   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network default is active
	I0914 22:46:22.205158   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network mk-embed-certs-588699 is active
	I0914 22:46:22.205524   46412 main.go:141] libmachine: (embed-certs-588699) Getting domain xml...
	I0914 22:46:22.206216   46412 main.go:141] libmachine: (embed-certs-588699) Creating domain...
	I0914 22:46:23.529479   46412 main.go:141] libmachine: (embed-certs-588699) Waiting to get IP...
	I0914 22:46:23.530274   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.530639   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.530694   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.530608   46986 retry.go:31] will retry after 299.617651ms: waiting for machine to come up
	I0914 22:46:23.494065   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:23.496974   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497458   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:23.497490   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497694   45954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:23.501920   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:23.517500   45954 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:23.517542   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:23.554344   45954 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:23.554403   45954 ssh_runner.go:195] Run: which lz4
	I0914 22:46:23.558745   45954 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:23.563443   45954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:23.563488   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:25.365372   45954 crio.go:444] Took 1.806660 seconds to copy over tarball
	I0914 22:46:25.365442   45954 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:23.832332   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.833457   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.833488   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.832911   46986 retry.go:31] will retry after 315.838121ms: waiting for machine to come up
	I0914 22:46:24.150532   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.150980   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.151009   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.150942   46986 retry.go:31] will retry after 369.928332ms: waiting for machine to come up
	I0914 22:46:24.522720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.523232   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.523257   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.523145   46986 retry.go:31] will retry after 533.396933ms: waiting for machine to come up
	I0914 22:46:25.057818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.058371   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.058405   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.058318   46986 retry.go:31] will retry after 747.798377ms: waiting for machine to come up
	I0914 22:46:25.807422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.807912   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.807956   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.807874   46986 retry.go:31] will retry after 947.037376ms: waiting for machine to come up
	I0914 22:46:26.756214   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:26.756720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:26.756757   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:26.756689   46986 retry.go:31] will retry after 1.117164865s: waiting for machine to come up
	I0914 22:46:27.875432   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:27.875931   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:27.875953   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:27.875886   46986 retry.go:31] will retry after 1.117181084s: waiting for machine to come up
	I0914 22:46:28.197684   45954 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.832216899s)
	I0914 22:46:28.197710   45954 crio.go:451] Took 2.832313 seconds to extract the tarball
	I0914 22:46:28.197718   45954 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:28.236545   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:28.286349   45954 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:28.286374   45954 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:28.286449   45954 ssh_runner.go:195] Run: crio config
	I0914 22:46:28.344205   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:28.344231   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:28.344253   45954 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:28.344289   45954 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.175 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799144 NodeName:default-k8s-diff-port-799144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:28.344454   45954 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.175
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799144"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:28.344536   45954 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-799144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0914 22:46:28.344591   45954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:28.354383   45954 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:28.354459   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:28.363277   45954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0914 22:46:28.378875   45954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:28.393535   45954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0914 22:46:28.408319   45954 ssh_runner.go:195] Run: grep 192.168.50.175	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:28.411497   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:28.421507   45954 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144 for IP: 192.168.50.175
	I0914 22:46:28.421536   45954 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:28.421702   45954 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:28.421742   45954 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:28.421805   45954 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.key
	I0914 22:46:28.421858   45954 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key.0216c1e7
	I0914 22:46:28.421894   45954 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key
	I0914 22:46:28.421994   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:28.422020   45954 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:28.422027   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:28.422048   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:28.422074   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:28.422095   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:28.422139   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:28.422695   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:28.443528   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:46:28.463679   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:28.483317   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:28.503486   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:28.523709   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:28.544539   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:28.565904   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:28.587316   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:28.611719   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:28.632158   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:28.652227   45954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:28.667709   45954 ssh_runner.go:195] Run: openssl version
	I0914 22:46:28.673084   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:28.682478   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686693   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686747   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.691836   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:28.701203   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:28.710996   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715353   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715408   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.720765   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:28.730750   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:28.740782   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745186   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745250   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.750589   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:28.760675   45954 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:28.764920   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:28.770573   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:28.776098   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:28.783455   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:28.790699   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:28.797514   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:28.804265   45954 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-799144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:28.804376   45954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:28.804427   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:28.833994   45954 cri.go:89] found id: ""
	I0914 22:46:28.834051   45954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:28.843702   45954 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:28.843724   45954 kubeadm.go:636] restartCluster start
	I0914 22:46:28.843769   45954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:28.852802   45954 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.854420   45954 kubeconfig.go:92] found "default-k8s-diff-port-799144" server: "https://192.168.50.175:8444"
	I0914 22:46:28.858058   45954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:28.866914   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.866968   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.877946   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.877969   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.878014   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.888579   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.389311   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.389420   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.401725   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.889346   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.889451   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.902432   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.388985   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.389062   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.401302   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.888853   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.888949   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.901032   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.389622   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.389733   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.405102   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.888685   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.888803   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.904300   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:32.388876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.388944   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.402419   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.995080   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:28.999205   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:28.999224   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:28.995414   46986 retry.go:31] will retry after 1.657878081s: waiting for machine to come up
	I0914 22:46:30.655422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:30.656029   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:30.656059   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:30.655960   46986 retry.go:31] will retry after 2.320968598s: waiting for machine to come up
	I0914 22:46:32.978950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:32.979423   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:32.979452   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:32.979369   46986 retry.go:31] will retry after 2.704173643s: waiting for machine to come up
	I0914 22:46:32.889585   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.889658   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.902514   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.388806   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.388906   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.405028   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.889633   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.889728   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.906250   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.388736   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.388810   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.403376   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.888851   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.888934   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.905873   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.389446   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.389516   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.404872   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.889475   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.889569   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.902431   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.388954   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.389054   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.401778   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.889442   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.889529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.902367   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:37.388925   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.389009   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.401860   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.685608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:35.686027   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:35.686064   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:35.685964   46986 retry.go:31] will retry after 2.240780497s: waiting for machine to come up
	I0914 22:46:37.928020   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:37.928402   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:37.928442   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:37.928354   46986 retry.go:31] will retry after 2.734049647s: waiting for machine to come up
	I0914 22:46:41.860186   46713 start.go:369] acquired machines lock for "old-k8s-version-930717" in 1m21.238611742s
	I0914 22:46:41.860234   46713 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:41.860251   46713 fix.go:54] fixHost starting: 
	I0914 22:46:41.860683   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:41.860738   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:41.877474   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0914 22:46:41.877964   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:41.878542   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:46:41.878568   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:41.878874   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:41.879057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:46:41.879276   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:46:41.880990   46713 fix.go:102] recreateIfNeeded on old-k8s-version-930717: state=Stopped err=<nil>
	I0914 22:46:41.881019   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	W0914 22:46:41.881175   46713 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:41.883128   46713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-930717" ...
	I0914 22:46:37.888876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.888950   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.901522   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.389056   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:38.389140   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:38.400632   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.867426   45954 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:38.867461   45954 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:38.867487   45954 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:38.867557   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:38.898268   45954 cri.go:89] found id: ""
	I0914 22:46:38.898328   45954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:38.914871   45954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:38.924737   45954 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:38.924785   45954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934436   45954 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934455   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.042672   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.982954   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.158791   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.235541   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.312855   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:46:40.312926   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.328687   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.842859   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.343019   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.842336   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.342351   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.665315   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.665775   46412 main.go:141] libmachine: (embed-certs-588699) Found IP for machine: 192.168.61.205
	I0914 22:46:40.665795   46412 main.go:141] libmachine: (embed-certs-588699) Reserving static IP address...
	I0914 22:46:40.665807   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has current primary IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.666273   46412 main.go:141] libmachine: (embed-certs-588699) Reserved static IP address: 192.168.61.205
	I0914 22:46:40.666316   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.666334   46412 main.go:141] libmachine: (embed-certs-588699) Waiting for SSH to be available...
	I0914 22:46:40.666375   46412 main.go:141] libmachine: (embed-certs-588699) DBG | skip adding static IP to network mk-embed-certs-588699 - found existing host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"}
	I0914 22:46:40.666401   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Getting to WaitForSSH function...
	I0914 22:46:40.668206   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668515   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.668542   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668654   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH client type: external
	I0914 22:46:40.668689   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa (-rw-------)
	I0914 22:46:40.668716   46412 main.go:141] libmachine: (embed-certs-588699) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:40.668728   46412 main.go:141] libmachine: (embed-certs-588699) DBG | About to run SSH command:
	I0914 22:46:40.668736   46412 main.go:141] libmachine: (embed-certs-588699) DBG | exit 0
	I0914 22:46:40.751202   46412 main.go:141] libmachine: (embed-certs-588699) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:40.751584   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetConfigRaw
	I0914 22:46:40.752291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:40.754685   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755054   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.755087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755318   46412 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/config.json ...
	I0914 22:46:40.755578   46412 machine.go:88] provisioning docker machine ...
	I0914 22:46:40.755603   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:40.755799   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.755940   46412 buildroot.go:166] provisioning hostname "embed-certs-588699"
	I0914 22:46:40.755959   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.756109   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.758111   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758435   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.758481   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758547   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.758686   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758798   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758983   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.759108   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.759567   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.759586   46412 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-588699 && echo "embed-certs-588699" | sudo tee /etc/hostname
	I0914 22:46:40.882559   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-588699
	
	I0914 22:46:40.882615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.885741   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.886137   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886403   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.886635   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886810   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886964   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.887176   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.887633   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.887662   46412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-588699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-588699/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-588699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:41.007991   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:41.008024   46412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:41.008075   46412 buildroot.go:174] setting up certificates
	I0914 22:46:41.008103   46412 provision.go:83] configureAuth start
	I0914 22:46:41.008118   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:41.008615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.011893   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012262   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.012295   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012467   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.014904   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015343   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.015378   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015551   46412 provision.go:138] copyHostCerts
	I0914 22:46:41.015605   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:41.015618   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:41.015691   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:41.015847   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:41.015864   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:41.015897   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:41.015979   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:41.015989   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:41.016019   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:41.016080   46412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.embed-certs-588699 san=[192.168.61.205 192.168.61.205 localhost 127.0.0.1 minikube embed-certs-588699]
	I0914 22:46:41.134486   46412 provision.go:172] copyRemoteCerts
	I0914 22:46:41.134537   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:41.134559   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.137472   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137789   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.137818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137995   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.138216   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.138365   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.138536   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.224196   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:41.244551   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:46:41.267745   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:41.292472   46412 provision.go:86] duration metric: configureAuth took 284.355734ms
	I0914 22:46:41.292497   46412 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:41.292668   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:41.292748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.295661   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296010   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.296042   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296246   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.296469   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296652   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296836   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.297031   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.297522   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.297556   46412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:41.609375   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:41.609417   46412 machine.go:91] provisioned docker machine in 853.82264ms
	I0914 22:46:41.609431   46412 start.go:300] post-start starting for "embed-certs-588699" (driver="kvm2")
	I0914 22:46:41.609444   46412 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:41.609472   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.609831   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:41.609890   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.613037   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613497   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.613525   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613662   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.613854   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.614023   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.614142   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.704618   46412 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:41.709759   46412 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:41.709787   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:41.709867   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:41.709991   46412 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:41.710127   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:41.721261   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:41.742359   46412 start.go:303] post-start completed in 132.913862ms
	I0914 22:46:41.742387   46412 fix.go:56] fixHost completed within 19.562130605s
	I0914 22:46:41.742418   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.745650   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.746172   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746369   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.746564   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746781   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746944   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.747138   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.747629   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.747648   46412 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:41.860006   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731601.811427748
	
	I0914 22:46:41.860030   46412 fix.go:206] guest clock: 1694731601.811427748
	I0914 22:46:41.860040   46412 fix.go:219] Guest: 2023-09-14 22:46:41.811427748 +0000 UTC Remote: 2023-09-14 22:46:41.742391633 +0000 UTC m=+142.955285980 (delta=69.036115ms)
	I0914 22:46:41.860091   46412 fix.go:190] guest clock delta is within tolerance: 69.036115ms
	I0914 22:46:41.860098   46412 start.go:83] releasing machines lock for "embed-certs-588699", held for 19.679882828s
	I0914 22:46:41.860131   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.860411   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.863136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863584   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.863618   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863721   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864206   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864398   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864477   46412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:41.864514   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.864639   46412 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:41.864666   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.867568   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.867976   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.868028   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868147   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868248   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868373   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868579   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.868691   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868833   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.868876   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.869026   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.980624   46412 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:41.986113   46412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:42.134956   46412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:42.141030   46412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:42.141101   46412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:42.158635   46412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:42.158660   46412 start.go:469] detecting cgroup driver to use...
	I0914 22:46:42.158722   46412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:42.173698   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:42.184948   46412 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:42.185007   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:42.196434   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:42.208320   46412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:42.326624   46412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:42.459498   46412 docker.go:212] disabling docker service ...
	I0914 22:46:42.459567   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:42.472479   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:42.486651   46412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:42.636161   46412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:42.739841   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:42.758562   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:42.779404   46412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:42.779472   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.787902   46412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:42.787954   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.799513   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.811428   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.823348   46412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:42.835569   46412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:42.842820   46412 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:42.842885   46412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:42.855225   46412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:42.863005   46412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:42.979756   46412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:43.181316   46412 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:43.181384   46412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:43.191275   46412 start.go:537] Will wait 60s for crictl version
	I0914 22:46:43.191343   46412 ssh_runner.go:195] Run: which crictl
	I0914 22:46:43.196264   46412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:43.228498   46412 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:43.228589   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.281222   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.341816   46412 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:43.343277   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:43.346473   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.346835   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:43.346882   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.347084   46412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:43.351205   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:43.364085   46412 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:43.364156   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:43.400558   46412 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:43.400634   46412 ssh_runner.go:195] Run: which lz4
	I0914 22:46:43.404906   46412 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:43.409239   46412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:43.409277   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:41.885236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Start
	I0914 22:46:41.885399   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring networks are active...
	I0914 22:46:41.886125   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network default is active
	I0914 22:46:41.886511   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network mk-old-k8s-version-930717 is active
	I0914 22:46:41.886855   46713 main.go:141] libmachine: (old-k8s-version-930717) Getting domain xml...
	I0914 22:46:41.887524   46713 main.go:141] libmachine: (old-k8s-version-930717) Creating domain...
	I0914 22:46:43.317748   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting to get IP...
	I0914 22:46:43.318757   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.319197   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.319288   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.319176   47160 retry.go:31] will retry after 287.487011ms: waiting for machine to come up
	I0914 22:46:43.608890   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.609712   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.609738   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.609656   47160 retry.go:31] will retry after 289.187771ms: waiting for machine to come up
	I0914 22:46:43.900234   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.900655   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.900679   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.900576   47160 retry.go:31] will retry after 433.007483ms: waiting for machine to come up
	I0914 22:46:44.335318   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.335775   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.335804   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.335727   47160 retry.go:31] will retry after 383.295397ms: waiting for machine to come up
	I0914 22:46:44.720415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.720967   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.721001   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.720856   47160 retry.go:31] will retry after 698.454643ms: waiting for machine to come up
	I0914 22:46:45.420833   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:45.421349   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:45.421391   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:45.421297   47160 retry.go:31] will retry after 938.590433ms: waiting for machine to come up
	I0914 22:46:42.842954   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.867206   45954 api_server.go:72] duration metric: took 2.554352134s to wait for apiserver process to appear ...
	I0914 22:46:42.867238   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:46:42.867257   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.755748   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:46:46.755780   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:46:46.755832   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.873209   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:46.873243   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.373637   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.391311   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.391349   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.873646   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.880286   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.880323   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:48.373423   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:48.389682   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:46:48.415694   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:46:48.415727   45954 api_server.go:131] duration metric: took 5.548481711s to wait for apiserver health ...
	I0914 22:46:48.415739   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.415748   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.417375   45954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:46:45.238555   46412 crio.go:444] Took 1.833681 seconds to copy over tarball
	I0914 22:46:45.238634   46412 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:48.251155   46412 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012492519s)
	I0914 22:46:48.251176   46412 crio.go:451] Took 3.012596 seconds to extract the tarball
	I0914 22:46:48.251184   46412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:48.290336   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:48.338277   46412 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:48.338302   46412 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:48.338378   46412 ssh_runner.go:195] Run: crio config
	I0914 22:46:48.402542   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.402564   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.402583   46412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:48.402604   46412 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.205 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-588699 NodeName:embed-certs-588699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:48.402791   46412 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-588699"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:48.402883   46412 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-588699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:46:48.402958   46412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:48.414406   46412 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:48.414484   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:48.426437   46412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 22:46:48.445351   46412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:48.463696   46412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0914 22:46:48.481887   46412 ssh_runner.go:195] Run: grep 192.168.61.205	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:48.485825   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:48.500182   46412 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699 for IP: 192.168.61.205
	I0914 22:46:48.500215   46412 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:48.500362   46412 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:48.500417   46412 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:48.500514   46412 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/client.key
	I0914 22:46:48.500600   46412 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key.8dac69f7
	I0914 22:46:48.500726   46412 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key
	I0914 22:46:48.500885   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:48.500926   46412 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:48.500942   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:48.500976   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:48.501008   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:48.501039   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:48.501096   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:48.501918   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:48.528790   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:46:48.558557   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:48.583664   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:48.608274   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:48.631638   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:48.655163   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:48.677452   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:48.700443   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:48.724547   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:48.751559   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:48.778910   46412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:48.794369   46412 ssh_runner.go:195] Run: openssl version
	I0914 22:46:48.799778   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:48.809263   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814790   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814848   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.820454   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:48.829942   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:46.361228   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:46.361816   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:46.361846   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:46.361795   47160 retry.go:31] will retry after 1.00738994s: waiting for machine to come up
	I0914 22:46:47.370525   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:47.370964   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:47.370991   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:47.370921   47160 retry.go:31] will retry after 1.441474351s: waiting for machine to come up
	I0914 22:46:48.813921   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:48.814415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:48.814447   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:48.814362   47160 retry.go:31] will retry after 1.497562998s: waiting for machine to come up
	I0914 22:46:50.313674   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:50.314191   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:50.314221   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:50.314137   47160 retry.go:31] will retry after 1.620308161s: waiting for machine to come up
	I0914 22:46:48.418825   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:46:48.456715   45954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:46:48.496982   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:46:48.515172   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:46:48.515209   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:46:48.515223   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:46:48.515234   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:46:48.515247   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:46:48.515261   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:46:48.515272   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:46:48.515285   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:46:48.515295   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:46:48.515307   45954 system_pods.go:74] duration metric: took 18.305048ms to wait for pod list to return data ...
	I0914 22:46:48.515320   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:46:48.518842   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:46:48.518875   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:46:48.518888   45954 node_conditions.go:105] duration metric: took 3.562448ms to run NodePressure ...
	I0914 22:46:48.518908   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:50.951051   45954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.432118027s)
	I0914 22:46:50.951087   45954 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959708   45954 kubeadm.go:787] kubelet initialised
	I0914 22:46:50.959735   45954 kubeadm.go:788] duration metric: took 8.637125ms waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959745   45954 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:50.966214   45954 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.975076   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975106   45954 pod_ready.go:81] duration metric: took 8.863218ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.975118   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975129   45954 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.982438   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982471   45954 pod_ready.go:81] duration metric: took 7.330437ms waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.982485   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982493   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.991067   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991102   45954 pod_ready.go:81] duration metric: took 8.574268ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.991115   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991125   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.006696   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006732   45954 pod_ready.go:81] duration metric: took 15.595604ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.006745   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006755   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.354645   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354678   45954 pod_ready.go:81] duration metric: took 347.913938ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.354690   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354702   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.754959   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.754998   45954 pod_ready.go:81] duration metric: took 400.283619ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.755012   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.755022   45954 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:52.156253   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156299   45954 pod_ready.go:81] duration metric: took 401.260791ms waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:52.156314   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156327   45954 pod_ready.go:38] duration metric: took 1.196571114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:52.156352   45954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:46:52.169026   45954 ops.go:34] apiserver oom_adj: -16
	I0914 22:46:52.169049   45954 kubeadm.go:640] restartCluster took 23.325317121s
	I0914 22:46:52.169059   45954 kubeadm.go:406] StartCluster complete in 23.364799998s
	I0914 22:46:52.169079   45954 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.169161   45954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:46:52.171787   45954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.172077   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:46:52.172229   45954 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:46:52.172310   45954 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172332   45954 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-799144"
	I0914 22:46:52.172325   45954 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799144"
	W0914 22:46:52.172340   45954 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:46:52.172347   45954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799144"
	I0914 22:46:52.172351   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:52.172394   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.172394   45954 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172424   45954 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.172436   45954 addons.go:240] addon metrics-server should already be in state true
	I0914 22:46:52.172500   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.173205   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173252   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173383   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173451   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173744   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173822   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.178174   45954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-799144" context rescaled to 1 replicas
	I0914 22:46:52.178208   45954 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:46:52.180577   45954 out.go:177] * Verifying Kubernetes components...
	I0914 22:46:52.182015   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:46:52.194030   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0914 22:46:52.194040   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I0914 22:46:52.194506   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.194767   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.195059   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195078   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195219   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195235   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195420   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.195642   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.195715   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.196346   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.196392   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.198560   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0914 22:46:52.199130   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.199612   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.199641   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.199995   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.200530   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.200575   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.206536   45954 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.206558   45954 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:46:52.206584   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.206941   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.206973   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.215857   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0914 22:46:52.216266   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.216801   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.216825   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.217297   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.217484   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.220211   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I0914 22:46:52.220740   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.221296   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.221314   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.221798   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.221986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.222185   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.224162   45954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:46:52.224261   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.225483   45954 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.225494   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:46:52.225511   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.225526   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0914 22:46:52.227067   45954 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:46:52.225976   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.228337   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:46:52.228354   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:46:52.228373   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.228750   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.228764   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.228959   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229601   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.229674   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.229702   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229908   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.230068   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.230171   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.230203   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.230280   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.230503   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.232673   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233097   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.233153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.233536   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.233684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.233821   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.251500   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43473
	I0914 22:46:52.252069   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.252702   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.252722   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.253171   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.253419   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.255233   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.255574   45954 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.255591   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:46:52.255609   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.258620   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.259178   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259379   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.259584   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.259754   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.259961   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.350515   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.367291   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:46:52.367309   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:46:52.413141   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:46:52.413170   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:46:52.419647   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.462672   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:52.462698   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:46:52.519331   45954 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:46:52.519330   45954 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:52.530851   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:53.719523   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368967292s)
	I0914 22:46:53.719575   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719582   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299890259s)
	I0914 22:46:53.719616   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719638   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.719589   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720079   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720083   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720097   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720101   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720107   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720111   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720121   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720080   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720404   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720414   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720425   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720501   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720525   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720538   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720553   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720804   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720822   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.721724   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.190817165s)
	I0914 22:46:53.721771   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.721784   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.722084   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.722100   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.722089   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.722115   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.722128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.723592   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.723602   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.723614   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.723631   45954 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-799144"
	I0914 22:46:53.725666   45954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:46:48.840421   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.179960   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.180026   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.185490   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:49.194744   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:49.205937   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210532   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210582   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.215917   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:49.225393   46412 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:49.229604   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:49.234795   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:49.239907   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:49.245153   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:49.250558   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:49.256142   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:49.261518   46412 kubeadm.go:404] StartCluster: {Name:embed-certs-588699 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:49.261618   46412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:49.261687   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:49.291460   46412 cri.go:89] found id: ""
	I0914 22:46:49.291560   46412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:49.300496   46412 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:49.300558   46412 kubeadm.go:636] restartCluster start
	I0914 22:46:49.300616   46412 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:49.309827   46412 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.311012   46412 kubeconfig.go:92] found "embed-certs-588699" server: "https://192.168.61.205:8443"
	I0914 22:46:49.313336   46412 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:49.321470   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.321528   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.332257   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.332275   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.332320   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.345427   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.846146   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.846240   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.859038   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.345492   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.345583   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.358070   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.845544   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.845605   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.861143   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.345602   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.345675   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.357406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.845964   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.846082   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.860079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.346093   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.346159   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.360952   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.845612   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.845717   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.860504   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:53.345991   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.360947   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
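
The block above repeats the same probe on a fixed interval: run pgrep for the kube-apiserver process and treat a non-zero exit as "not up yet". A minimal stand-alone sketch of that pattern follows (illustrative only: it runs pgrep locally and without the sudo wrapper, whereas minikube issues the command over SSH via ssh_runner, and the 2-minute deadline here is an assumption):

	// poll_apiserver.go -- sketch of the retry pattern logged above; not minikube's api_server.go.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Same check the log repeats: pgrep exits 1 when nothing matches,
			// which surfaces above as "Process exited with status 1".
			out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("stopped: unable to get apiserver pid")
	}
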
	I0914 22:46:51.936297   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:51.936809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:51.936840   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:51.936747   47160 retry.go:31] will retry after 2.284330296s: waiting for machine to come up
	I0914 22:46:54.222960   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:54.223478   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:54.223530   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:54.223417   47160 retry.go:31] will retry after 3.537695113s: waiting for machine to come up
	I0914 22:46:53.726984   45954 addons.go:502] enable addons completed in 1.554762762s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:46:54.641725   45954 node_ready.go:58] node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:57.141217   45954 node_ready.go:49] node "default-k8s-diff-port-799144" has status "Ready":"True"
	I0914 22:46:57.141240   45954 node_ready.go:38] duration metric: took 4.621872993s waiting for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:57.141250   45954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:57.151019   45954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162159   45954 pod_ready.go:92] pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace has status "Ready":"True"
	I0914 22:46:57.162180   45954 pod_ready.go:81] duration metric: took 11.133949ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162189   45954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:53.845734   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.845815   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.858406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.346078   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.346138   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.360079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.845738   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.845801   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.861945   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.346533   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.346627   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.360445   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.845577   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.845681   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.856800   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.346374   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.346461   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.357724   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.846264   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.846376   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.857963   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.346006   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.357336   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.845877   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.845944   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.857310   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:58.345855   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.345925   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.357766   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.762315   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:57.762689   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:57.762714   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:57.762651   47160 retry.go:31] will retry after 3.773493672s: waiting for machine to come up
	I0914 22:46:59.185077   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:01.185320   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:02.912525   45407 start.go:369] acquired machines lock for "no-preload-344363" in 55.358672707s
	I0914 22:47:02.912580   45407 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:47:02.912592   45407 fix.go:54] fixHost starting: 
	I0914 22:47:02.913002   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:47:02.913035   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:47:02.932998   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0914 22:47:02.933535   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:47:02.933956   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:47:02.933977   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:47:02.934303   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:47:02.934484   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:02.934627   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:47:02.936412   45407 fix.go:102] recreateIfNeeded on no-preload-344363: state=Stopped err=<nil>
	I0914 22:47:02.936438   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	W0914 22:47:02.936601   45407 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:47:02.938235   45407 out.go:177] * Restarting existing kvm2 VM for "no-preload-344363" ...
	I0914 22:46:58.845728   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.845806   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.859436   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:59.322167   46412 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:59.322206   46412 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:59.322218   46412 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:59.322278   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:59.352268   46412 cri.go:89] found id: ""
	I0914 22:46:59.352371   46412 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:59.366742   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:59.374537   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:59.374598   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382227   46412 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382251   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:59.486171   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.268311   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.462362   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.528925   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.601616   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:00.601697   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:00.623311   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.140972   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.640574   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.141044   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.640374   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.140881   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.166662   46412 api_server.go:72] duration metric: took 2.565044214s to wait for apiserver process to appear ...
	I0914 22:47:03.166688   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:03.166703   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
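
From this point the wait switches from "is the process running" to "does /healthz answer", which is what produces the 403 and 500 bodies later in this log. A minimal sketch of that polling loop, under the assumption of an anonymous client that skips certificate verification (minikube's api_server.go instead authenticates with the cluster's client certificates, which is why the anonymous 403 below is transient):

	// healthz_poll.go -- illustrative sketch; endpoint copied from the log, timeout assumed.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		url := "https://192.168.61.205:8443/healthz"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz to return 200")
	}
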
	I0914 22:47:01.540578   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541058   46713 main.go:141] libmachine: (old-k8s-version-930717) Found IP for machine: 192.168.72.70
	I0914 22:47:01.541095   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has current primary IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541106   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserving static IP address...
	I0914 22:47:01.541552   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserved static IP address: 192.168.72.70
	I0914 22:47:01.541579   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting for SSH to be available...
	I0914 22:47:01.541613   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.541646   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | skip adding static IP to network mk-old-k8s-version-930717 - found existing host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"}
	I0914 22:47:01.541672   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Getting to WaitForSSH function...
	I0914 22:47:01.543898   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544285   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.544317   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544428   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH client type: external
	I0914 22:47:01.544451   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa (-rw-------)
	I0914 22:47:01.544499   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:01.544518   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | About to run SSH command:
	I0914 22:47:01.544552   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | exit 0
	I0914 22:47:01.639336   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:01.639694   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetConfigRaw
	I0914 22:47:01.640324   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.642979   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643345   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.643389   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643643   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:47:01.643833   46713 machine.go:88] provisioning docker machine ...
	I0914 22:47:01.643855   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:01.644085   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644249   46713 buildroot.go:166] provisioning hostname "old-k8s-version-930717"
	I0914 22:47:01.644272   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644434   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.646429   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.646771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.646819   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.647008   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.647209   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647360   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647536   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.647737   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.648245   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.648270   46713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-930717 && echo "old-k8s-version-930717" | sudo tee /etc/hostname
	I0914 22:47:01.789438   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-930717
	
	I0914 22:47:01.789472   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.792828   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793229   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.793277   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793459   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.793644   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793778   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793953   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.794120   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.794459   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.794478   46713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-930717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-930717/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-930717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:01.928496   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:01.928536   46713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:01.928567   46713 buildroot.go:174] setting up certificates
	I0914 22:47:01.928586   46713 provision.go:83] configureAuth start
	I0914 22:47:01.928609   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.928914   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.931976   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932368   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.932398   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932542   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.934939   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935311   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.935344   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935480   46713 provision.go:138] copyHostCerts
	I0914 22:47:01.935537   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:01.935548   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:01.935620   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:01.935775   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:01.935789   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:01.935824   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:01.935970   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:01.935981   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:01.936010   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:01.936086   46713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-930717 san=[192.168.72.70 192.168.72.70 localhost 127.0.0.1 minikube old-k8s-version-930717]
	I0914 22:47:02.167446   46713 provision.go:172] copyRemoteCerts
	I0914 22:47:02.167510   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:02.167534   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.170442   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.170862   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.170900   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.171089   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.171302   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.171496   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.171645   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.267051   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:02.289098   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 22:47:02.312189   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:02.334319   46713 provision.go:86] duration metric: configureAuth took 405.716896ms
	I0914 22:47:02.334346   46713 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:02.334555   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:47:02.334638   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.337255   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337605   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.337637   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.337949   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338100   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338240   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.338384   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.338859   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.338890   46713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:02.654307   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:02.654332   46713 machine.go:91] provisioned docker machine in 1.010485195s
	I0914 22:47:02.654345   46713 start.go:300] post-start starting for "old-k8s-version-930717" (driver="kvm2")
	I0914 22:47:02.654358   46713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:02.654382   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.654747   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:02.654782   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.657773   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658153   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.658182   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658425   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.658630   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.658812   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.659001   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.750387   46713 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:02.754444   46713 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:02.754468   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:02.754545   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:02.754654   46713 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:02.754762   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:02.765781   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:02.788047   46713 start.go:303] post-start completed in 133.686385ms
	I0914 22:47:02.788072   46713 fix.go:56] fixHost completed within 20.927830884s
	I0914 22:47:02.788098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.791051   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791408   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.791441   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791628   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.791840   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792041   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792215   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.792383   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.792817   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.792836   46713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:02.912359   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731622.856601606
	
	I0914 22:47:02.912381   46713 fix.go:206] guest clock: 1694731622.856601606
	I0914 22:47:02.912391   46713 fix.go:219] Guest: 2023-09-14 22:47:02.856601606 +0000 UTC Remote: 2023-09-14 22:47:02.788077838 +0000 UTC m=+102.306332554 (delta=68.523768ms)
	I0914 22:47:02.912413   46713 fix.go:190] guest clock delta is within tolerance: 68.523768ms
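
The tolerance check above is plain subtraction of the two timestamps. Reproducing it with the values copied from the log (the one-second tolerance used here is an assumption for illustration; fix.go does not print its threshold):

	// clock_delta.go -- worked example of the guest/host clock comparison above.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest clock, from the log: 1694731622.856601606 (epoch seconds).
		guest := time.Unix(0, 1694731622856601606)
		// Host-side reading, also from the log: 2023-09-14 22:47:02.788077838 UTC.
		remote := time.Date(2023, time.September, 14, 22, 47, 2, 788077838, time.UTC)

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		fmt.Println("delta:", delta)                          // ~68.523768ms, matching the log
		fmt.Println("within tolerance:", delta < time.Second) // 1s tolerance is an assumption
	}
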
	I0914 22:47:02.912424   46713 start.go:83] releasing machines lock for "old-k8s-version-930717", held for 21.052207532s
	I0914 22:47:02.912457   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.912730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:02.915769   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916200   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.916265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916453   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917073   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917245   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917352   46713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:02.917397   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.917535   46713 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:02.917563   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.920256   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920363   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920656   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920695   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920724   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920744   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920959   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921261   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921282   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921431   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921489   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921567   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.921635   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:03.014070   46713 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:03.047877   46713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:03.192347   46713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:03.200249   46713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:03.200324   46713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:03.215110   46713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:03.215138   46713 start.go:469] detecting cgroup driver to use...
	I0914 22:47:03.215201   46713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:03.228736   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:03.241326   46713 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:03.241377   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:03.253001   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:03.264573   46713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:03.371107   46713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:03.512481   46713 docker.go:212] disabling docker service ...
	I0914 22:47:03.512554   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:03.526054   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:03.537583   46713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:03.662087   46713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:03.793448   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:03.807574   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:03.828240   46713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0914 22:47:03.828311   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.842435   46713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:03.842490   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.856199   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.867448   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.878222   46713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:03.891806   46713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:03.899686   46713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:03.899740   46713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:03.912584   46713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:03.920771   46713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:04.040861   46713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:47:04.230077   46713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:04.230147   46713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:04.235664   46713 start.go:537] Will wait 60s for crictl version
	I0914 22:47:04.235726   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:04.239737   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:04.279680   46713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:04.279755   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.329363   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.389025   46713 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
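
The sed substitutions a few lines above (crio.go:59 and crio.go:70) leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with, schematically, the following keys. This is a sketch of the edited values only, not the full file, and the section headers are assumed from CRI-O's standard layout:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
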
	I0914 22:47:02.939505   45407 main.go:141] libmachine: (no-preload-344363) Calling .Start
	I0914 22:47:02.939701   45407 main.go:141] libmachine: (no-preload-344363) Ensuring networks are active...
	I0914 22:47:02.940415   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network default is active
	I0914 22:47:02.940832   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network mk-no-preload-344363 is active
	I0914 22:47:02.941287   45407 main.go:141] libmachine: (no-preload-344363) Getting domain xml...
	I0914 22:47:02.942103   45407 main.go:141] libmachine: (no-preload-344363) Creating domain...
	I0914 22:47:04.410207   45407 main.go:141] libmachine: (no-preload-344363) Waiting to get IP...
	I0914 22:47:04.411192   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.411669   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.411744   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.411647   47373 retry.go:31] will retry after 198.435142ms: waiting for machine to come up
	I0914 22:47:04.612435   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.612957   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.613025   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.612934   47373 retry.go:31] will retry after 350.950211ms: waiting for machine to come up
	I0914 22:47:04.965570   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.966332   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.966458   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.966377   47373 retry.go:31] will retry after 398.454996ms: waiting for machine to come up
	I0914 22:47:04.390295   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:04.393815   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394249   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:04.394282   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394543   46713 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:04.398850   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:04.411297   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:47:04.411363   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:04.443950   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:04.444023   46713 ssh_runner.go:195] Run: which lz4
	I0914 22:47:04.448422   46713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:47:04.453479   46713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:47:04.453505   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0914 22:47:03.686086   45954 pod_ready.go:92] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.686112   45954 pod_ready.go:81] duration metric: took 6.523915685s waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.686125   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692434   45954 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.692454   45954 pod_ready.go:81] duration metric: took 6.320818ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692466   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698065   45954 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.698088   45954 pod_ready.go:81] duration metric: took 5.613243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698100   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703688   45954 pod_ready.go:92] pod "kube-proxy-j2qmv" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.703706   45954 pod_ready.go:81] duration metric: took 5.599421ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703718   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708487   45954 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.708505   45954 pod_ready.go:81] duration metric: took 4.779322ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708516   45954 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:05.993620   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
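
The pod_ready waits above poll each system pod until its Ready condition reports True, within the 6m0s budget the log states. A minimal client-go sketch of the same check (illustrative; the kubeconfig path and 2-second poll interval are assumptions, and minikube's pod_ready.go wraps this in its own retry helpers):

	// pod_ready.go -- sketch of waiting for a pod's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same budget the log states
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-57f55c9bc5-hfgp8", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
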
	I0914 22:47:07.475579   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.475617   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:07.475631   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:07.531335   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.531366   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:08.032057   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.039350   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.039384   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:08.531559   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.538857   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.538891   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:09.031899   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:09.037891   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:47:09.047398   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:47:09.047426   46412 api_server.go:131] duration metric: took 5.880732639s to wait for apiserver health ...
	I0914 22:47:09.047434   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:47:09.047440   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:09.049137   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
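The 403, 500, 200 progression on /healthz a few lines above is the usual pattern while kube-apiserver finishes its post-start hooks: anonymous requests stay forbidden until the rbac/bootstrap-roles hook completes, individual hooks then report failures, and finally the endpoint returns a plain "ok". A rough Go sketch of polling such an endpoint (certificate verification is skipped here only for brevity, and the URL and interval are examples) could look like this:

    // pollHealthz GETs the apiserver /healthz endpoint until it returns 200 "ok"
    // or the timeout elapses. Sketch only; TLS verification is skipped for brevity.
    package example

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reported "ok"
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }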
	I0914 22:47:05.366070   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.366812   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.366844   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.366740   47373 retry.go:31] will retry after 471.857141ms: waiting for machine to come up
	I0914 22:47:05.840519   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.841198   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.841229   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.841150   47373 retry.go:31] will retry after 632.189193ms: waiting for machine to come up
	I0914 22:47:06.475175   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:06.475769   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:06.475800   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:06.475704   47373 retry.go:31] will retry after 866.407813ms: waiting for machine to come up
	I0914 22:47:07.344343   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:07.344865   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:07.344897   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:07.344815   47373 retry.go:31] will retry after 1.101301607s: waiting for machine to come up
	I0914 22:47:08.448452   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:08.449070   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:08.449111   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:08.449014   47373 retry.go:31] will retry after 995.314765ms: waiting for machine to come up
	I0914 22:47:09.446294   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:09.446708   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:09.446740   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:09.446653   47373 retry.go:31] will retry after 1.180552008s: waiting for machine to come up
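The retry.go lines above show libmachine repeatedly checking the KVM network for the machine's DHCP lease, waiting a little longer before each new attempt. A generic retry helper in that spirit, with an assumed doubling back-off and jitter rather than minikube's exact schedule, might be:

    // retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
    // little longer (with jitter) between tries. Illustrative sketch only.
    package example

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // add up to 50% jitter so concurrent waiters do not retry in lockstep
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
    }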
	I0914 22:47:05.984485   46713 crio.go:444] Took 1.536109 seconds to copy over tarball
	I0914 22:47:05.984562   46713 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:47:09.247825   46713 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.263230608s)
	I0914 22:47:09.247858   46713 crio.go:451] Took 3.263345 seconds to extract the tarball
	I0914 22:47:09.247871   46713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:47:09.289821   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:09.340429   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:09.340463   46713 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:09.340544   46713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.340568   46713 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0914 22:47:09.340535   46713 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.340531   46713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.340789   46713 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.340811   46713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.340886   46713 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.340793   46713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.342655   46713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.342658   46713 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.342636   46713 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.342635   46713 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.342793   46713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.561063   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0914 22:47:09.564079   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.564246   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.564957   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.566014   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.571757   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.578469   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.687502   46713 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0914 22:47:09.687548   46713 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0914 22:47:09.687591   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.727036   46713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0914 22:47:09.727085   46713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.727140   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0914 22:47:09.737952   46713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0914 22:47:09.737986   46713 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.737990   46713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0914 22:47:09.738002   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738013   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738023   46713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.738063   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.744728   46713 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0914 22:47:09.744768   46713 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.744813   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753014   46713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0914 22:47:09.753055   46713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.753080   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753104   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.753056   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0914 22:47:09.753149   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.753193   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.753213   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.758372   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.758544   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.875271   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0914 22:47:09.875299   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0914 22:47:09.875357   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0914 22:47:09.875382   46713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.875404   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0914 22:47:09.876393   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0914 22:47:09.878339   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0914 22:47:09.878491   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0914 22:47:09.881457   46713 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0914 22:47:09.881475   46713 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.881521   46713 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
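Because the v1.16.0 preload does not cover these images, the run falls back to the on-disk image cache: each cached archive is stat'ed on the guest, the copy is skipped when it already exists, and the archive is loaded into CRI-O's storage with podman load. A condensed sketch of that final check-and-load step, run locally with os/exec instead of minikube's SSH runner, could be:

    // loadCachedImage checks that an image archive is present on the node and
    // loads it into the container runtime via podman. Sketch only; minikube's
    // cache_images/ssh_runner helpers transfer the file over SSH and lock around it.
    package example

    import (
        "fmt"
        "os/exec"
    )

    func loadCachedImage(archive string) error {
        // the copy step is skipped when the archive already exists on the node
        if err := exec.Command("sudo", "stat", archive).Run(); err != nil {
            return fmt.Errorf("archive %s not present on node: %w", archive, err)
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load failed: %v: %s", err, out)
        }
        fmt.Printf("loaded %s\n", archive)
        return nil
    }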
	I0914 22:47:08.496805   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.993044   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:09.050966   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:09.061912   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:09.096783   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:09.111938   46412 system_pods.go:59] 8 kube-system pods found
	I0914 22:47:09.111976   46412 system_pods.go:61] "coredns-5dd5756b68-zrd8r" [5b5f18a0-d6ee-42f2-b31a-4f8555b50388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:09.111988   46412 system_pods.go:61] "etcd-embed-certs-588699" [b32d61b5-8c3f-4980-9f0f-c08630be9c36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:47:09.112001   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [58ac976e-7a8c-4aee-9ee5-b92bd7e897b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:47:09.112015   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [3f9587f5-fe32-446a-a4c9-cb679b177937] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:47:09.112036   46412 system_pods.go:61] "kube-proxy-l8pq9" [4aecae33-dcd9-4ec6-a537-ecbb076c44d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:47:09.112052   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [f23ab185-f4c2-4e39-936d-51d51538b0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:47:09.112066   46412 system_pods.go:61] "metrics-server-57f55c9bc5-zvk82" [3c48277c-4604-4a83-82ea-2776cf0d0537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:47:09.112077   46412 system_pods.go:61] "storage-provisioner" [f0acbbe1-c326-4863-ae2e-d2d3e5be07c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:47:09.112090   46412 system_pods.go:74] duration metric: took 15.280254ms to wait for pod list to return data ...
	I0914 22:47:09.112103   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:09.119686   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:09.119725   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:09.119747   46412 node_conditions.go:105] duration metric: took 7.637688ms to run NodePressure ...
	I0914 22:47:09.119768   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:09.407351   46412 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414338   46412 kubeadm.go:787] kubelet initialised
	I0914 22:47:09.414361   46412 kubeadm.go:788] duration metric: took 6.974234ms waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414369   46412 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:47:09.424482   46412 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:12.171133   46412 pod_ready.go:102] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.628919   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:10.629418   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:10.629449   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:10.629366   47373 retry.go:31] will retry after 1.486310454s: waiting for machine to come up
	I0914 22:47:12.117762   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:12.118350   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:12.118381   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:12.118295   47373 retry.go:31] will retry after 2.678402115s: waiting for machine to come up
	I0914 22:47:14.798599   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:14.799127   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:14.799160   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:14.799060   47373 retry.go:31] will retry after 2.724185493s: waiting for machine to come up
	I0914 22:47:10.647242   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:12.244764   46713 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.363213143s)
	I0914 22:47:12.244798   46713 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0914 22:47:12.244823   46713 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.013457524s)
	I0914 22:47:12.244888   46713 cache_images.go:92] LoadImages completed in 2.904411161s
	W0914 22:47:12.244978   46713 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0914 22:47:12.245070   46713 ssh_runner.go:195] Run: crio config
	I0914 22:47:12.328636   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:12.328663   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:12.328687   46713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:12.328710   46713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-930717 NodeName:old-k8s-version-930717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 22:47:12.328882   46713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-930717"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-930717
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:12.328984   46713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-930717 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:47:12.329062   46713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0914 22:47:12.339084   46713 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:12.339169   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:12.348354   46713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 22:47:12.369083   46713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:12.388242   46713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0914 22:47:12.407261   46713 ssh_runner.go:195] Run: grep 192.168.72.70	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:12.411055   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
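The grep and bash pair above makes sure /etc/hosts on the guest maps control-plane.minikube.internal to the node IP, removing any stale entry first. The same idea written directly in Go (file path, IP, and hostname passed in by the caller; minikube itself does this through a shell pipeline over SSH) might look like:

    // ensureHostsEntry rewrites an /etc/hosts-style file so exactly one line maps
    // hostname to ip. Illustrative only; blank lines are not preserved.
    package example

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // drop any previous mapping for this hostname, mirroring the grep -v above
            if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }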
	I0914 22:47:12.425034   46713 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717 for IP: 192.168.72.70
	I0914 22:47:12.425070   46713 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:12.425236   46713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:12.425283   46713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:12.425372   46713 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.key
	I0914 22:47:12.425451   46713 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key.382dacf3
	I0914 22:47:12.425512   46713 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key
	I0914 22:47:12.425642   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:12.425671   46713 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:12.425685   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:12.425708   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:12.425732   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:12.425751   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:12.425789   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:12.426339   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:12.456306   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:47:12.486038   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:12.520941   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:47:12.552007   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:12.589620   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:12.619358   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:12.650395   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:12.678898   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:12.704668   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:12.730499   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:12.755286   46713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:12.773801   46713 ssh_runner.go:195] Run: openssl version
	I0914 22:47:12.781147   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:12.793953   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799864   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799922   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.806881   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:12.817936   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:12.830758   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836538   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836613   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.843368   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:12.855592   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:12.866207   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871317   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871368   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.878438   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:12.891012   46713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:12.895887   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:12.902284   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:12.909482   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:12.916524   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:12.924045   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:12.929935   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
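Each openssl x509 -checkend 86400 call above only confirms that the existing certificate stays valid for at least one more day before it is reused. An equivalent check with Go's crypto/x509, where the duration mirrors the -checkend argument, is roughly:

    // certValidFor reports whether the PEM certificate at path remains valid for
    // at least d more time, mirroring `openssl x509 -checkend <seconds>`.
    package example

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func certValidFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }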
	I0914 22:47:12.937292   46713 kubeadm.go:404] StartCluster: {Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:12.937417   46713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:12.937470   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:12.975807   46713 cri.go:89] found id: ""
	I0914 22:47:12.975902   46713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:12.988356   46713 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:12.988379   46713 kubeadm.go:636] restartCluster start
	I0914 22:47:12.988434   46713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:13.000294   46713 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.001492   46713 kubeconfig.go:92] found "old-k8s-version-930717" server: "https://192.168.72.70:8443"
	I0914 22:47:13.008583   46713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:13.023004   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.023065   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.037604   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.037625   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.037671   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.048939   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.549653   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.549746   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.561983   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.049481   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.049588   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.064694   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.549101   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.549195   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.564858   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:15.049112   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.049206   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.063428   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:12.993654   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:14.995358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:13.946979   46412 pod_ready.go:92] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:13.947004   46412 pod_ready.go:81] duration metric: took 4.522495708s waiting for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:13.947013   46412 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:15.968061   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:18.465595   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:17.526472   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:17.526915   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:17.526946   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:17.526867   47373 retry.go:31] will retry after 3.587907236s: waiting for machine to come up
	I0914 22:47:15.549179   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.549273   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.561977   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.049593   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.049678   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.063654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.549178   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.549248   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.561922   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.049041   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.049131   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.062442   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.550005   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.550066   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.561254   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.049855   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.049932   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.062226   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.549845   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.549941   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.561219   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.049739   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.049829   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.061225   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.550035   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.550112   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.561546   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:20.049979   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.050080   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.061478   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.489830   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:19.490802   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.490931   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.118871   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119369   45407 main.go:141] libmachine: (no-preload-344363) Found IP for machine: 192.168.39.60
	I0914 22:47:21.119391   45407 main.go:141] libmachine: (no-preload-344363) Reserving static IP address...
	I0914 22:47:21.119418   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has current primary IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119860   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.119888   45407 main.go:141] libmachine: (no-preload-344363) Reserved static IP address: 192.168.39.60
	I0914 22:47:21.119906   45407 main.go:141] libmachine: (no-preload-344363) DBG | skip adding static IP to network mk-no-preload-344363 - found existing host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"}
	I0914 22:47:21.119931   45407 main.go:141] libmachine: (no-preload-344363) DBG | Getting to WaitForSSH function...
	I0914 22:47:21.119949   45407 main.go:141] libmachine: (no-preload-344363) Waiting for SSH to be available...
	I0914 22:47:21.121965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122282   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.122312   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122392   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH client type: external
	I0914 22:47:21.122429   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa (-rw-------)
	I0914 22:47:21.122482   45407 main.go:141] libmachine: (no-preload-344363) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:21.122510   45407 main.go:141] libmachine: (no-preload-344363) DBG | About to run SSH command:
	I0914 22:47:21.122521   45407 main.go:141] libmachine: (no-preload-344363) DBG | exit 0
	I0914 22:47:21.206981   45407 main.go:141] libmachine: (no-preload-344363) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:21.207366   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetConfigRaw
	I0914 22:47:21.208066   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.210323   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210607   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.210639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210795   45407 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/config.json ...
	I0914 22:47:21.211016   45407 machine.go:88] provisioning docker machine ...
	I0914 22:47:21.211036   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:21.211258   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211431   45407 buildroot.go:166] provisioning hostname "no-preload-344363"
	I0914 22:47:21.211455   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211629   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.213574   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.213887   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.213921   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.214015   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.214181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214338   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.214648   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.215041   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.215056   45407 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-344363 && echo "no-preload-344363" | sudo tee /etc/hostname
	I0914 22:47:21.347323   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-344363
	
	I0914 22:47:21.347358   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.350445   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.350846   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.350882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.351144   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.351393   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351599   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351766   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.351944   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.352264   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.352291   45407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-344363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-344363/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-344363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:21.471619   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:21.471648   45407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:21.471671   45407 buildroot.go:174] setting up certificates
	I0914 22:47:21.471683   45407 provision.go:83] configureAuth start
	I0914 22:47:21.471696   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.472019   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.474639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475113   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.475141   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475293   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.477627   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.477976   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.478009   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.478148   45407 provision.go:138] copyHostCerts
	I0914 22:47:21.478189   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:21.478198   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:21.478249   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:21.478336   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:21.478344   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:21.478362   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:21.478416   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:21.478423   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:21.478439   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:21.478482   45407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.no-preload-344363 san=[192.168.39.60 192.168.39.60 localhost 127.0.0.1 minikube no-preload-344363]
	I0914 22:47:21.546956   45407 provision.go:172] copyRemoteCerts
	I0914 22:47:21.547006   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:21.547029   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.549773   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550217   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.550257   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550468   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.550683   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.550850   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.551050   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:21.635939   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:21.656944   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:47:21.679064   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:21.701127   45407 provision.go:86] duration metric: configureAuth took 229.434247ms
	I0914 22:47:21.701147   45407 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:21.701319   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:47:21.701381   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.704100   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704475   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.704512   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704672   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.704865   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705046   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705218   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.705382   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.705828   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.705849   45407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:22.037291   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:22.037337   45407 machine.go:91] provisioned docker machine in 826.295956ms
	I0914 22:47:22.037350   45407 start.go:300] post-start starting for "no-preload-344363" (driver="kvm2")
	I0914 22:47:22.037363   45407 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:22.037396   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.037704   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:22.037729   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.040372   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040729   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.040757   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040896   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.041082   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.041266   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.041373   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.129612   45407 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:22.133522   45407 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:22.133550   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:22.133625   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:22.133715   45407 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:22.133844   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:22.142411   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:22.165470   45407 start.go:303] post-start completed in 128.106418ms
	I0914 22:47:22.165496   45407 fix.go:56] fixHost completed within 19.252903923s
	I0914 22:47:22.165524   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.168403   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168696   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.168731   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168894   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.169095   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169248   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169384   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.169571   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:22.169891   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:22.169904   45407 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 22:47:22.284038   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731642.258576336
	
	I0914 22:47:22.284062   45407 fix.go:206] guest clock: 1694731642.258576336
	I0914 22:47:22.284071   45407 fix.go:219] Guest: 2023-09-14 22:47:22.258576336 +0000 UTC Remote: 2023-09-14 22:47:22.16550191 +0000 UTC m=+357.203571663 (delta=93.074426ms)
	I0914 22:47:22.284107   45407 fix.go:190] guest clock delta is within tolerance: 93.074426ms
	I0914 22:47:22.284117   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 19.371563772s
	I0914 22:47:22.284146   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.284388   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:22.286809   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287091   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.287133   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287288   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287782   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287978   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.288050   45407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:22.288085   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.288176   45407 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:22.288197   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.290608   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.290936   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.290965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291067   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291157   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291345   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291516   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.291529   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.291554   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291649   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.291706   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291837   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291975   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.292158   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.417570   45407 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:22.423145   45407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:22.563752   45407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:22.569625   45407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:22.569718   45407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:22.585504   45407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:22.585527   45407 start.go:469] detecting cgroup driver to use...
	I0914 22:47:22.585610   45407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:22.599600   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:22.612039   45407 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:22.612080   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:22.624817   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:22.637141   45407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:22.744181   45407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:22.864420   45407 docker.go:212] disabling docker service ...
	I0914 22:47:22.864490   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:22.877360   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:22.888786   45407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:23.000914   45407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:23.137575   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:23.150682   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:23.167898   45407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:47:23.167966   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.176916   45407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:23.176991   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.185751   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.195260   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.204852   45407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:23.214303   45407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:23.222654   45407 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:23.222717   45407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:23.235654   45407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:23.244081   45407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:23.357943   45407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:47:23.521315   45407 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:23.521410   45407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:23.526834   45407 start.go:537] Will wait 60s for crictl version
	I0914 22:47:23.526889   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:23.530250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:23.562270   45407 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:23.562358   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.606666   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.658460   45407 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:47:20.467600   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:20.964310   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.964331   46412 pod_ready.go:81] duration metric: took 7.017312906s waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.964349   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968539   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.968555   46412 pod_ready.go:81] duration metric: took 4.200242ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968563   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973180   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.973194   46412 pod_ready.go:81] duration metric: took 4.625123ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973206   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977403   46412 pod_ready.go:92] pod "kube-proxy-l8pq9" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.977418   46412 pod_ready.go:81] duration metric: took 4.206831ms waiting for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977425   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375236   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:22.375259   46412 pod_ready.go:81] duration metric: took 1.397826525s waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375271   46412 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:23.659885   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:23.662745   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663195   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:23.663228   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663452   45407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:23.667637   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:23.678881   45407 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:47:23.678929   45407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:23.708267   45407 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:47:23.708309   45407 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:23.708390   45407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.708421   45407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.708424   45407 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0914 22:47:23.708437   45407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.708425   45407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.708537   45407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.708403   45407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.708393   45407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.709903   45407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.709887   45407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.709899   45407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.710189   45407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.710260   45407 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0914 22:47:23.710346   45407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.917134   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.929080   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.929396   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0914 22:47:23.935684   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.936236   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.937239   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.937622   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.006429   45407 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0914 22:47:24.006479   45407 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.006524   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.102547   45407 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0914 22:47:24.102597   45407 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.102641   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201012   45407 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0914 22:47:24.201050   45407 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.201100   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201106   45407 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0914 22:47:24.201138   45407 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.201156   45407 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0914 22:47:24.201203   45407 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.201227   45407 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0914 22:47:24.201282   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.201294   45407 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.201329   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201236   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201180   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.206295   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.263389   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0914 22:47:24.263451   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.263501   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.263513   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:24.263534   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0914 22:47:24.263573   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.263665   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.273844   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0914 22:47:24.273932   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:24.338823   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0914 22:47:24.338944   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:24.344560   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0914 22:47:24.344580   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0914 22:47:24.344594   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344635   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344659   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:24.344678   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0914 22:47:24.344723   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0914 22:47:24.344745   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0914 22:47:24.344816   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:24.346975   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0914 22:47:24.953835   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:20.549479   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.549585   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.563121   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.049732   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.049807   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.061447   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.549012   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.549073   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.561653   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.049517   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.049582   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.062280   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.549943   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.550017   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.562654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:23.024019   46713 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:23.024043   46713 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:23.024054   46713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:23.024101   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:23.060059   46713 cri.go:89] found id: ""
	I0914 22:47:23.060116   46713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:23.078480   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:23.087665   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:23.087714   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096513   46713 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096535   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:23.205072   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.081881   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.285041   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.364758   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.468127   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:24.468201   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:24.483354   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.007133   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.507231   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:23.992945   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.492600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:24.475872   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.978889   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.317110   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.97244294s)
	I0914 22:47:26.317145   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0914 22:47:26.317167   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317174   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (1.972489589s)
	I0914 22:47:26.317202   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0914 22:47:26.317215   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317248   45407 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.363386448s)
	I0914 22:47:26.317281   45407 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 22:47:26.317319   45407 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.317366   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:26.317213   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (1.972376756s)
	I0914 22:47:26.317426   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0914 22:47:28.397989   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (2.080744487s)
	I0914 22:47:28.398021   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0914 22:47:28.398031   45407 ssh_runner.go:235] Completed: which crictl: (2.080647539s)
	I0914 22:47:28.398048   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398093   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398095   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.006554   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:26.032232   46713 api_server.go:72] duration metric: took 1.564104415s to wait for apiserver process to appear ...
	I0914 22:47:26.032255   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:26.032270   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:28.992292   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.490442   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.033000   46713 api_server.go:269] stopped: https://192.168.72.70:8443/healthz: Get "https://192.168.72.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 22:47:31.033044   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:31.568908   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:31.568937   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:32.069915   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.080424   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.080456   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:32.570110   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.580879   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.580918   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:33.069247   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:33.077664   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:47:33.086933   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:47:33.086960   46713 api_server.go:131] duration metric: took 7.054699415s to wait for apiserver health ...
	I0914 22:47:33.086973   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:33.086981   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:33.088794   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:29.476304   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.975459   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:30.974281   45407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.57612291s)
	I0914 22:47:30.974347   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 22:47:30.974381   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.576263058s)
	I0914 22:47:30.974403   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0914 22:47:30.974427   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:30.974455   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:30.974470   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:33.737309   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.762815322s)
	I0914 22:47:33.737355   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0914 22:47:33.737379   45407 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.737322   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.762844826s)
	I0914 22:47:33.737464   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 22:47:33.737436   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.090357   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:33.103371   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
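The 457-byte conflist written above is not reproduced in the log; as an illustrative sketch only (field values are assumptions, not the exact file minikube generates), a bridge CNI config for the default 10.244.0.0/16 pod CIDR typically looks like:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF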
	I0914 22:47:33.123072   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:33.133238   46713 system_pods.go:59] 7 kube-system pods found
	I0914 22:47:33.133268   46713 system_pods.go:61] "coredns-5644d7b6d9-8sbjk" [638464d2-96db-460d-bf82-0ee79df816da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:33.133278   46713 system_pods.go:61] "etcd-old-k8s-version-930717" [4b38f48a-fc4a-43d5-a2b4-414aff712c1b] Running
	I0914 22:47:33.133286   46713 system_pods.go:61] "kube-apiserver-old-k8s-version-930717" [523a3adc-8c68-4980-8a53-133476ce2488] Running
	I0914 22:47:33.133294   46713 system_pods.go:61] "kube-controller-manager-old-k8s-version-930717" [36fd7e01-4a5d-446f-8370-f7a7e886571c] Running
	I0914 22:47:33.133306   46713 system_pods.go:61] "kube-proxy-l4qz4" [c61d0471-0a9e-4662-b723-39944c8b3c31] Running
	I0914 22:47:33.133314   46713 system_pods.go:61] "kube-scheduler-old-k8s-version-930717" [f6d45807-c7f2-4545-b732-45dbd945c660] Running
	I0914 22:47:33.133323   46713 system_pods.go:61] "storage-provisioner" [2956bea1-80f8-4f61-a635-4332d4e3042e] Running
	I0914 22:47:33.133331   46713 system_pods.go:74] duration metric: took 10.233824ms to wait for pod list to return data ...
	I0914 22:47:33.133343   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:33.137733   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:33.137765   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:33.137776   46713 node_conditions.go:105] duration metric: took 4.42667ms to run NodePressure ...
	I0914 22:47:33.137795   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:33.590921   46713 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:33.597720   46713 retry.go:31] will retry after 159.399424ms: kubelet not initialised
	I0914 22:47:33.767747   46713 retry.go:31] will retry after 191.717885ms: kubelet not initialised
	I0914 22:47:33.967120   46713 retry.go:31] will retry after 382.121852ms: kubelet not initialised
	I0914 22:47:34.354106   46713 retry.go:31] will retry after 1.055800568s: kubelet not initialised
	I0914 22:47:35.413704   46713 retry.go:31] will retry after 1.341728619s: kubelet not initialised
	I0914 22:47:33.993188   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.491280   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:34.475254   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.977175   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.760804   46713 retry.go:31] will retry after 2.668611083s: kubelet not initialised
	I0914 22:47:39.434688   46713 retry.go:31] will retry after 2.1019007s: kubelet not initialised
	I0914 22:47:38.994051   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.490913   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:38.998980   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.474686   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:40.530763   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.793268381s)
	I0914 22:47:40.530793   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0914 22:47:40.530820   45407 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:40.530881   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:41.888277   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.357355595s)
	I0914 22:47:41.888305   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0914 22:47:41.888338   45407 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:41.888405   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:42.537191   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 22:47:42.537244   45407 cache_images.go:123] Successfully loaded all cached images
	I0914 22:47:42.537251   45407 cache_images.go:92] LoadImages completed in 18.828927203s
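The image-loading step above amounts to feeding the cached tarballs under /var/lib/minikube/images to podman, whose image store is shared with CRI-O on the minikube VM; a short sketch of loading and verifying one of them by hand:

sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
sudo crictl images | grep etcd    # the loaded image should now be visible through the CRI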
	I0914 22:47:42.537344   45407 ssh_runner.go:195] Run: crio config
	I0914 22:47:42.594035   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:47:42.594056   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:42.594075   45407 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:42.594098   45407 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-344363 NodeName:no-preload-344363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:47:42.594272   45407 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-344363"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
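The dump above is a single kubeadm.yaml holding four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A quick sanity check on the node once the file has been copied over (see the scp step below):

sudo grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
# expected:
# kind: InitConfiguration
# kind: ClusterConfiguration
# kind: KubeletConfiguration
# kind: KubeProxyConfiguration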
	
	I0914 22:47:42.594383   45407 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-344363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:47:42.594449   45407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:47:42.604172   45407 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:42.604243   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:42.612570   45407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0914 22:47:42.628203   45407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:42.643625   45407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
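With the kubelet unit, its kubeadm drop-in and the new kubeadm.yaml written (the three scp steps above), the standard systemd workflow for picking up changed unit files would be the following sketch; in this run the kubelet itself is brought up via the kubeadm kubelet-start phase further down:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
systemctl status kubelet --no-pager --lines=0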
	I0914 22:47:42.658843   45407 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:42.661922   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:42.672252   45407 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363 for IP: 192.168.39.60
	I0914 22:47:42.672279   45407 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:42.672420   45407 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:42.672462   45407 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:42.672536   45407 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.key
	I0914 22:47:42.672630   45407 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key.a014e791
	I0914 22:47:42.672693   45407 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key
	I0914 22:47:42.672828   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:42.672867   45407 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:42.672879   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:42.672915   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:42.672948   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:42.672982   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:42.673044   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:42.673593   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:42.695080   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:47:42.716844   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:42.746475   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0914 22:47:42.769289   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:42.790650   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:42.811665   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:42.833241   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:42.853851   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:42.875270   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:42.896913   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:42.917370   45407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:42.934549   45407 ssh_runner.go:195] Run: openssl version
	I0914 22:47:42.939762   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:42.949829   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954155   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954204   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.959317   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:42.968463   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:42.979023   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983436   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983502   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.988655   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:42.998288   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:43.007767   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011865   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011940   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.016837   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
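The .0 symlinks created above follow OpenSSL's c_rehash convention: the link name is the certificate's subject-name hash. A sketch reproducing the last one with the same paths as the log:

hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
echo "${hash}"                                           # b5213941 for this CA
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"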
	I0914 22:47:43.026372   45407 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:43.030622   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:43.036026   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:43.041394   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:43.046608   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:43.051675   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:43.056621   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
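Each openssl run above uses -checkend 86400, which exits 0 only if the certificate is still valid one day from now; a compact equivalent for the same files (a sketch):

for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
         etcd/server etcd/healthcheck-client etcd/peer; do
  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
    && echo "${c}: valid for at least 24h" \
    || echo "${c}: expires within 24h (or is unreadable)"
done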
	I0914 22:47:43.061552   45407 kubeadm.go:404] StartCluster: {Name:no-preload-344363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:43.061645   45407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:43.061700   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:43.090894   45407 cri.go:89] found id: ""
	I0914 22:47:43.090957   45407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:43.100715   45407 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:43.100732   45407 kubeadm.go:636] restartCluster start
	I0914 22:47:43.100782   45407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:43.109233   45407 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.110217   45407 kubeconfig.go:92] found "no-preload-344363" server: "https://192.168.39.60:8443"
	I0914 22:47:43.112442   45407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:43.120580   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.120619   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.131224   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.131238   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.131292   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.140990   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.641661   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.641753   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.653379   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.142002   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.142077   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.154194   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.641806   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.641931   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.653795   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:41.541334   46713 retry.go:31] will retry after 2.553142131s: kubelet not initialised
	I0914 22:47:44.100647   46713 retry.go:31] will retry after 6.538244211s: kubelet not initialised
	I0914 22:47:43.995757   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.490438   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:43.974300   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.474137   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:45.141728   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.141816   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.153503   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:45.641693   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.641775   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.653204   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.141748   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.141838   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.153035   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.641294   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.641386   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.653144   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.141813   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.141915   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.152408   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.641793   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.641872   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.653228   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.141212   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.141304   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.152568   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.641805   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.641881   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.652184   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.141839   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.141909   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.152921   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.642082   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.642160   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.656837   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.991209   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:51.492672   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:48.973567   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.974964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:52.975525   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.141324   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.141399   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.153003   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:50.642032   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.642113   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.653830   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.141403   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.141486   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.152324   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.641932   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.642027   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.653279   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.141928   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.141998   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.152653   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.641151   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.641239   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.652312   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:53.121389   45407 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:53.121422   45407 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:53.121436   45407 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:53.121511   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:53.150615   45407 cri.go:89] found id: ""
	I0914 22:47:53.150681   45407 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:53.164511   45407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:53.173713   45407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:53.173778   45407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183776   45407 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183797   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:53.310974   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.230246   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.409237   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.474183   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.572433   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:54.572581   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:54.584938   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:50.644922   46713 retry.go:31] will retry after 11.248631638s: kubelet not initialised
	I0914 22:47:53.990630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.990661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.475037   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:57.475941   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.098638   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:55.599218   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.099188   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.598826   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.621701   45407 api_server.go:72] duration metric: took 2.049267478s to wait for apiserver process to appear ...
	I0914 22:47:56.621729   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:56.621749   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622263   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:56.622301   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622682   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:57.123404   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.433050   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:48:00.433082   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:48:00.433096   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.467030   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.467073   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:00.623319   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.633882   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.633912   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.123559   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.128661   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.128691   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.623201   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.629775   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.629804   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:02.123439   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:02.131052   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:48:02.141185   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:48:02.141213   45407 api_server.go:131] duration metric: took 5.519473898s to wait for apiserver health ...
	I0914 22:48:02.141222   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:48:02.141228   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:48:02.143254   45407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:57.992038   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:59.992600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:02.144756   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:48:02.158230   45407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:48:02.182382   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:48:02.204733   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:48:02.204786   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:48:02.204801   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:48:02.204817   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:48:02.204834   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:48:02.204847   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:48:02.204859   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:48:02.204876   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:48:02.204887   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:48:02.204900   45407 system_pods.go:74] duration metric: took 22.491699ms to wait for pod list to return data ...
	I0914 22:48:02.204913   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:48:02.208661   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:48:02.208692   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:48:02.208706   45407 node_conditions.go:105] duration metric: took 3.7844ms to run NodePressure ...
	I0914 22:48:02.208731   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:48:02.454257   45407 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458848   45407 kubeadm.go:787] kubelet initialised
	I0914 22:48:02.458868   45407 kubeadm.go:788] duration metric: took 4.585034ms waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458874   45407 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:02.464634   45407 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.471350   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471371   45407 pod_ready.go:81] duration metric: took 6.714087ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.471379   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471387   45407 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.476977   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.476998   45407 pod_ready.go:81] duration metric: took 5.604627ms waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.477009   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.477019   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.483218   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483236   45407 pod_ready.go:81] duration metric: took 6.211697ms waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.483244   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483256   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.589184   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589217   45407 pod_ready.go:81] duration metric: took 105.950074ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.589227   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589236   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.987051   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987081   45407 pod_ready.go:81] duration metric: took 397.836385ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.987094   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987103   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.392835   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392865   45407 pod_ready.go:81] duration metric: took 405.754351ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.392876   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392886   45407 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.786615   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786641   45407 pod_ready.go:81] duration metric: took 393.746366ms waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.786652   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786660   45407 pod_ready.go:38] duration metric: took 1.327778716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:03.786676   45407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:48:03.798081   45407 ops.go:34] apiserver oom_adj: -16
	I0914 22:48:03.798101   45407 kubeadm.go:640] restartCluster took 20.697363165s
	I0914 22:48:03.798107   45407 kubeadm.go:406] StartCluster complete in 20.736562339s
	I0914 22:48:03.798121   45407 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.798193   45407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:48:03.799954   45407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.800200   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:48:03.800299   45407 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:48:03.800368   45407 addons.go:69] Setting storage-provisioner=true in profile "no-preload-344363"
	I0914 22:48:03.800449   45407 addons.go:231] Setting addon storage-provisioner=true in "no-preload-344363"
	W0914 22:48:03.800462   45407 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:48:03.800511   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800394   45407 addons.go:69] Setting metrics-server=true in profile "no-preload-344363"
	I0914 22:48:03.800543   45407 addons.go:231] Setting addon metrics-server=true in "no-preload-344363"
	W0914 22:48:03.800558   45407 addons.go:240] addon metrics-server should already be in state true
	I0914 22:48:03.800590   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800388   45407 addons.go:69] Setting default-storageclass=true in profile "no-preload-344363"
	I0914 22:48:03.800633   45407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-344363"
	I0914 22:48:03.800411   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:48:03.800906   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800909   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800944   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.801011   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.801054   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.800968   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.804911   45407 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-344363" context rescaled to 1 replicas
	I0914 22:48:03.804946   45407 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:48:03.807503   45407 out.go:177] * Verifying Kubernetes components...
	I0914 22:47:59.973913   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:01.974625   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:03.808768   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:48:03.816774   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0914 22:48:03.816773   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0914 22:48:03.817265   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817518   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817791   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.817821   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818011   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.818032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818223   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818407   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818431   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.818976   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.819027   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.829592   45407 addons.go:231] Setting addon default-storageclass=true in "no-preload-344363"
	W0914 22:48:03.829614   45407 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:48:03.829641   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.830013   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.830047   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.835514   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0914 22:48:03.835935   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.836447   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.836473   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.836841   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.837011   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.838909   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.843677   45407 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:48:03.845231   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:48:03.845246   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:48:03.845261   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.844291   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0914 22:48:03.845685   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.846224   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.846242   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.846572   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.847073   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.847103   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.847332   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0914 22:48:03.848400   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.848666   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849160   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.849182   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.849263   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.849283   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849314   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.849461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.849570   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.849635   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.849682   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.850555   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.850585   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.863035   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0914 22:48:03.863559   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864010   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.864204   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0914 22:48:03.864478   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.864526   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864752   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.864936   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864955   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.865261   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.865489   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.866474   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.868300   45407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:48:03.867504   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.869841   45407 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:03.869855   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:48:03.869874   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.870067   45407 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:03.870078   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:48:03.870091   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.873462   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.873859   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.873882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874026   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874114   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.874287   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.874397   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.874903   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874949   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.874980   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.875135   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.875301   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.875486   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.956934   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:48:03.956956   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:48:03.973872   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:48:03.973896   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:48:04.002028   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.002051   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:48:04.018279   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:04.037990   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:04.047125   45407 node_ready.go:35] waiting up to 6m0s for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:04.047292   45407 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:48:04.086299   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.991926   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.991952   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992225   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992292   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992324   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992342   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992364   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992614   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992634   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992649   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992657   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992665   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992914   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992933   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:01.898769   46713 retry.go:31] will retry after 9.475485234s: kubelet not initialised
	I0914 22:48:05.528027   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.490009157s)
	I0914 22:48:05.528078   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528087   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528435   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528457   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528470   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528436   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.528481   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528802   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528824   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528829   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.600274   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.51392997s)
	I0914 22:48:05.600338   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600351   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.600645   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.600670   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.600682   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600695   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.602502   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.602513   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.602524   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.602546   45407 addons.go:467] Verifying addon metrics-server=true in "no-preload-344363"
	I0914 22:48:05.604330   45407 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 22:48:02.491577   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.995014   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.474529   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:06.474964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:05.605648   45407 addons.go:502] enable addons completed in 1.805353931s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0914 22:48:06.198114   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:08.199023   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:07.490770   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:09.991693   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:08.974469   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:11.474711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:10.698198   45407 node_ready.go:49] node "no-preload-344363" has status "Ready":"True"
	I0914 22:48:10.698218   45407 node_ready.go:38] duration metric: took 6.651066752s waiting for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:10.698227   45407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:10.704694   45407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710103   45407 pod_ready.go:92] pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:10.710119   45407 pod_ready.go:81] duration metric: took 5.400404ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710128   45407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.747445   45407 pod_ready.go:102] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.229927   45407 pod_ready.go:92] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:13.229953   45407 pod_ready.go:81] duration metric: took 2.519818297s waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:13.229966   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747126   45407 pod_ready.go:92] pod "kube-apiserver-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.747147   45407 pod_ready.go:81] duration metric: took 1.51717338s waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747157   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752397   45407 pod_ready.go:92] pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.752413   45407 pod_ready.go:81] duration metric: took 5.250049ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752420   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.380752   46713 kubeadm.go:787] kubelet initialised
	I0914 22:48:11.380783   46713 kubeadm.go:788] duration metric: took 37.789831498s waiting for restarted kubelet to initialise ...
	I0914 22:48:11.380793   46713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:11.386189   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392948   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.392970   46713 pod_ready.go:81] duration metric: took 6.75113ms waiting for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392981   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398606   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.398627   46713 pod_ready.go:81] duration metric: took 5.638835ms waiting for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398639   46713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404145   46713 pod_ready.go:92] pod "etcd-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.404174   46713 pod_ready.go:81] duration metric: took 5.527173ms waiting for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404187   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409428   46713 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.409448   46713 pod_ready.go:81] duration metric: took 5.252278ms waiting for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409461   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779225   46713 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.779252   46713 pod_ready.go:81] duration metric: took 369.782336ms waiting for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779267   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179256   46713 pod_ready.go:92] pod "kube-proxy-l4qz4" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.179277   46713 pod_ready.go:81] duration metric: took 400.003039ms waiting for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179286   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578889   46713 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.578921   46713 pod_ready.go:81] duration metric: took 399.627203ms waiting for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578935   46713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:12.491274   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:14.991146   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.991799   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.974725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.473917   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.474722   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:15.099588   45407 pod_ready.go:92] pod "kube-proxy-zzkbp" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.099612   45407 pod_ready.go:81] duration metric: took 347.18498ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.099623   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498642   45407 pod_ready.go:92] pod "kube-scheduler-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.498664   45407 pod_ready.go:81] duration metric: took 399.034277ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498678   45407 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:17.806138   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.887157   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:19.390361   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.991911   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.993133   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.474578   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.305450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:22.305521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:24.306131   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:21.885143   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.886722   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.490126   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.991185   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.974547   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.473850   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.805651   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.306125   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.384992   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.385266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.385877   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:27.991827   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.991995   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.475603   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.974568   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:31.806483   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.306121   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.886341   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.385506   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.488948   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.490950   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.989621   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.474815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.973407   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.806806   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.806988   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.886043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.386865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.991151   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:41.491384   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:39.974109   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.473010   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.808362   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.305126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.886094   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.386710   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.991121   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.992500   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:44.475120   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:46.973837   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.305212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.305740   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.806334   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.886380   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.887578   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:48.490416   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:50.990196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.474209   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.474657   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.808853   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.305742   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.888488   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.385591   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:52.990333   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.991549   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:53.974301   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:55.976250   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.474372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.807759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.304597   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.885164   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.885809   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:57.491267   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.492043   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.991231   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:00.974064   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:02.975136   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.808275   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.385492   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.385865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:05.386266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.992513   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.490253   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:04.975537   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.473413   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.306066   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.805711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.886495   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.386100   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.995545   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.490960   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:09.476367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.974480   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.807870   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.306759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:12.386166   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.990090   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.489864   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.975102   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.474761   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.475314   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:15.809041   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.305700   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:17.385490   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:19.386201   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.490727   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.493813   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.973383   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.973978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.306906   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.805781   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.806417   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:21.387171   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:23.394663   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.989981   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.998602   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.975048   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.473804   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.805993   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:25.886256   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:28.385307   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:30.386473   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.490860   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.991665   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.992373   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.475815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.973092   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.305648   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.806797   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.886577   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.386203   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.490086   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:36.490465   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:33.973662   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.974041   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.473275   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.306848   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.806295   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.388154   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.886447   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.490850   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.989734   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.473543   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.473711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:41.807197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.305572   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.385788   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.386844   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.995794   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:45.490630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.474251   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.974425   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.306070   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.805530   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.886095   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.888504   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:47.491269   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.990921   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.474354   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.973552   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:50.806526   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.807021   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.385411   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.385825   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.490166   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:54.991982   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.974372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:56.473350   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.305863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.306450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.308315   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.886560   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.886950   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.386043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.490604   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.490811   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.993715   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:58.973152   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.975078   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.474589   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.806409   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.806552   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:02.387458   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.886066   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.490551   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:06.490632   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.974290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.974714   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.810256   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.305443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.386252   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:09.887808   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.490994   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.990417   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.474207   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.973759   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.305662   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.807626   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.385387   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.386055   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.991196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.489856   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.974362   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.474890   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.305348   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.306521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.306661   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:16.386682   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:18.386805   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.491969   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.990884   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.991904   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.476052   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.973290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.806863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.810113   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:20.886118   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.388653   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:24.490861   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.991437   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.474556   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.307894   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.809126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:25.885409   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:27.886080   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.386151   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:29.489358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.491041   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.973725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.975342   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.474590   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.306171   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.307126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:32.386190   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:34.886414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.491383   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.492155   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.974978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:38.473506   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.307221   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.806174   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.386235   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.886579   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.990447   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.991649   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.474117   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.973778   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.308130   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.806411   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.807765   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.385199   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.387102   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.491019   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.993076   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.974689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.473863   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.305509   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.305825   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:46.885280   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.385189   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.491661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.989457   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.991512   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.973709   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.976112   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.306459   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.805441   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.386498   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.887424   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.492074   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.989668   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.473073   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.473689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.474597   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:55.806711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.305434   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.386640   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.885298   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.995348   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:01.491262   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.974371   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.474367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.305803   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.806120   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:04.807184   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.886357   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.887274   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:05.386976   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.708637   45954 pod_ready.go:81] duration metric: took 4m0.000105295s waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:03.708672   45954 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:03.708681   45954 pod_ready.go:38] duration metric: took 4m6.567418041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:03.708699   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:51:03.708739   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:03.708804   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:03.759664   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:03.759688   45954 cri.go:89] found id: ""
	I0914 22:51:03.759697   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:03.759753   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.764736   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:03.764789   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:03.800251   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:03.800280   45954 cri.go:89] found id: ""
	I0914 22:51:03.800290   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:03.800341   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.804761   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:03.804818   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:03.847136   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:03.847162   45954 cri.go:89] found id: ""
	I0914 22:51:03.847172   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:03.847215   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.851253   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:03.851325   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:03.882629   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:03.882654   45954 cri.go:89] found id: ""
	I0914 22:51:03.882664   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:03.882713   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.887586   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:03.887642   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:03.916702   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:03.916723   45954 cri.go:89] found id: ""
	I0914 22:51:03.916730   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:03.916773   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.921172   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:03.921232   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:03.950593   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:03.950618   45954 cri.go:89] found id: ""
	I0914 22:51:03.950628   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:03.950689   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.954303   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:03.954366   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:03.982565   45954 cri.go:89] found id: ""
	I0914 22:51:03.982588   45954 logs.go:284] 0 containers: []
	W0914 22:51:03.982597   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:03.982604   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:03.982662   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:04.011932   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.011957   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:04.011964   45954 cri.go:89] found id: ""
	I0914 22:51:04.011972   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:04.012026   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.016091   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.019830   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:04.019852   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:04.061469   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:04.061494   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:04.092823   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:04.092846   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:04.156150   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:04.156190   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:04.169879   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:04.169920   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:04.226165   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:04.226198   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.255658   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:04.255692   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:04.299368   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:04.299401   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:04.440433   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:04.440467   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:04.477396   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:04.477425   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:04.513399   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:04.513431   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:05.016889   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:05.016925   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:05.067712   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:05.067749   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:05.973423   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.973637   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.307754   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.805419   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.389465   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.885150   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.597529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:51:07.614053   45954 api_server.go:72] duration metric: took 4m15.435815174s to wait for apiserver process to appear ...
	I0914 22:51:07.614076   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:51:07.614106   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:07.614155   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:07.643309   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:07.643333   45954 cri.go:89] found id: ""
	I0914 22:51:07.643342   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:07.643411   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.647434   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:07.647511   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:07.676943   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:07.676959   45954 cri.go:89] found id: ""
	I0914 22:51:07.676966   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:07.677006   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.681053   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:07.681101   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:07.714710   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:07.714736   45954 cri.go:89] found id: ""
	I0914 22:51:07.714745   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:07.714807   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.718900   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:07.718966   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:07.754786   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:07.754808   45954 cri.go:89] found id: ""
	I0914 22:51:07.754815   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:07.754867   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.759623   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:07.759693   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:07.794366   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:07.794389   45954 cri.go:89] found id: ""
	I0914 22:51:07.794398   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:07.794457   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.798717   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:07.798777   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:07.831131   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:07.831158   45954 cri.go:89] found id: ""
	I0914 22:51:07.831167   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:07.831227   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.835696   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:07.835762   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:07.865802   45954 cri.go:89] found id: ""
	I0914 22:51:07.865831   45954 logs.go:284] 0 containers: []
	W0914 22:51:07.865841   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:07.865849   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:07.865905   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:07.895025   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:07.895049   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:07.895056   45954 cri.go:89] found id: ""
	I0914 22:51:07.895064   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:07.895118   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.899230   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.903731   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:07.903751   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:08.033922   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:08.033952   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:08.068784   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:08.068812   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:08.120395   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:08.120428   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:08.133740   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:08.133763   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:08.173288   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:08.173320   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:08.203964   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:08.203988   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:08.732327   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:08.732367   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:08.784110   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:08.784150   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:08.819179   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:08.819210   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:08.866612   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:08.866644   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:08.900892   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:08.900939   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:08.950563   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:08.950593   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:11.505428   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:51:11.511228   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:51:11.512855   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:51:11.512881   45954 api_server.go:131] duration metric: took 3.898798182s to wait for apiserver health ...
	I0914 22:51:11.512891   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:51:11.512911   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:11.512954   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:11.544538   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:11.544563   45954 cri.go:89] found id: ""
	I0914 22:51:11.544573   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:11.544629   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.548878   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:11.548946   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:11.578439   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:11.578464   45954 cri.go:89] found id: ""
	I0914 22:51:11.578473   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:11.578531   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.582515   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:11.582576   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:11.611837   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:11.611857   45954 cri.go:89] found id: ""
	I0914 22:51:11.611863   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:11.611917   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.615685   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:11.615744   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:11.645850   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:11.645869   45954 cri.go:89] found id: ""
	I0914 22:51:11.645876   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:11.645914   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.649995   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:11.650048   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:11.683515   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:11.683541   45954 cri.go:89] found id: ""
	I0914 22:51:11.683550   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:11.683604   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.687715   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:11.687806   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:11.721411   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.721428   45954 cri.go:89] found id: ""
	I0914 22:51:11.721434   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:11.721477   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.725801   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:11.725859   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:11.760391   45954 cri.go:89] found id: ""
	I0914 22:51:11.760417   45954 logs.go:284] 0 containers: []
	W0914 22:51:11.760427   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:11.760437   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:11.760498   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:11.792140   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.792162   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:11.792168   45954 cri.go:89] found id: ""
	I0914 22:51:11.792175   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:11.792234   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.796600   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.800888   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:11.800912   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:11.863075   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:11.863106   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:11.877744   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:11.877775   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.930381   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:11.930418   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.961471   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:11.961497   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:12.005391   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:12.005417   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:12.034742   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:12.034771   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:12.064672   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:12.064702   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:12.095801   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:12.095834   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:12.124224   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:12.124260   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:09.974433   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.975389   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.806380   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.807443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:12.657331   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:12.657375   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:12.718197   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:12.718227   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:12.845353   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:12.845381   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:15.395502   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:51:15.395524   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.395529   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.395534   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.395540   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.395544   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.395548   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.395554   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.395559   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.395565   45954 system_pods.go:74] duration metric: took 3.882669085s to wait for pod list to return data ...
	I0914 22:51:15.395572   45954 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:51:15.398128   45954 default_sa.go:45] found service account: "default"
	I0914 22:51:15.398148   45954 default_sa.go:55] duration metric: took 2.571314ms for default service account to be created ...
	I0914 22:51:15.398155   45954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:51:15.407495   45954 system_pods.go:86] 8 kube-system pods found
	I0914 22:51:15.407517   45954 system_pods.go:89] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.407522   45954 system_pods.go:89] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.407527   45954 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.407532   45954 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.407535   45954 system_pods.go:89] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.407540   45954 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.407549   45954 system_pods.go:89] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.407558   45954 system_pods.go:89] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.407576   45954 system_pods.go:126] duration metric: took 9.409452ms to wait for k8s-apps to be running ...
	I0914 22:51:15.407587   45954 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:51:15.407633   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:15.424728   45954 system_svc.go:56] duration metric: took 17.122868ms WaitForService to wait for kubelet.
	I0914 22:51:15.424754   45954 kubeadm.go:581] duration metric: took 4m23.246518879s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:51:15.424794   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:51:15.428492   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:51:15.428520   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:51:15.428534   45954 node_conditions.go:105] duration metric: took 3.733977ms to run NodePressure ...
	I0914 22:51:15.428550   45954 start.go:228] waiting for startup goroutines ...
	I0914 22:51:15.428563   45954 start.go:233] waiting for cluster config update ...
	I0914 22:51:15.428576   45954 start.go:242] writing updated cluster config ...
	I0914 22:51:15.428887   45954 ssh_runner.go:195] Run: rm -f paused
	I0914 22:51:15.479711   45954 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:51:15.482387   45954 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799144" cluster and "default" namespace by default
	I0914 22:51:11.885968   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.887391   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:14.474188   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.974056   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.306146   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.806037   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.386306   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.386406   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:19.474446   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:21.474860   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.375841   46412 pod_ready.go:81] duration metric: took 4m0.000552226s waiting for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:22.375872   46412 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:22.375890   46412 pod_ready.go:38] duration metric: took 4m12.961510371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:22.375915   46412 kubeadm.go:640] restartCluster took 4m33.075347594s
	W0914 22:51:22.375983   46412 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:51:22.376022   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:51:20.806249   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.807141   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:24.809235   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:20.888098   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:23.386482   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:25.386542   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.305114   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:29.306240   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.886695   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:30.385740   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:31.306508   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:33.306655   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:32.886111   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.384925   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.805992   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:38.307801   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:37.385344   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:39.888303   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:40.806212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:43.305815   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:42.388414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:44.388718   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:45.306197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:47.806983   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:49.807150   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:46.885737   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:48.885794   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.115476   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.73941793s)
	I0914 22:51:53.115549   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:53.128821   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:51:53.137267   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:51:53.145533   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:51:53.145569   46412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 22:51:53.202279   46412 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 22:51:53.202501   46412 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:51:53.353512   46412 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:51:53.353674   46412 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:51:53.353816   46412 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:51:53.513428   46412 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:51:53.515450   46412 out.go:204]   - Generating certificates and keys ...
	I0914 22:51:53.515574   46412 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:51:53.515672   46412 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:51:53.515785   46412 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:51:53.515896   46412 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:51:53.516234   46412 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:51:53.516841   46412 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:51:53.517488   46412 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:51:53.517974   46412 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:51:53.518563   46412 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:51:53.519109   46412 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:51:53.519728   46412 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:51:53.519809   46412 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:51:53.641517   46412 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:51:53.842920   46412 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:51:53.982500   46412 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:51:54.065181   46412 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:51:54.065678   46412 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:51:54.071437   46412 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:51:52.305643   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.305996   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:51.386246   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.386956   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.073206   46412 out.go:204]   - Booting up control plane ...
	I0914 22:51:54.073363   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:51:54.073470   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:51:54.073554   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:51:54.091178   46412 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:51:54.091289   46412 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:51:54.091371   46412 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:51:54.221867   46412 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:51:56.306473   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:58.306953   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:55.886624   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:57.887222   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:00.385756   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.225144   46412 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002879 seconds
	I0914 22:52:02.225314   46412 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:02.244705   46412 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:02.778808   46412 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:02.779047   46412 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-588699 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:52:03.296381   46412 kubeadm.go:322] [bootstrap-token] Using token: x2l9oo.p0a5g5jx49srzji3
	I0914 22:52:03.297976   46412 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:03.298091   46412 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:03.308475   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:52:03.319954   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:03.325968   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:03.330375   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:03.334686   46412 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:03.353185   46412 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:52:03.622326   46412 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:03.721359   46412 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:03.721385   46412 kubeadm.go:322] 
	I0914 22:52:03.721472   46412 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:03.721486   46412 kubeadm.go:322] 
	I0914 22:52:03.721589   46412 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:03.721602   46412 kubeadm.go:322] 
	I0914 22:52:03.721623   46412 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:03.721678   46412 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:03.721727   46412 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:03.721764   46412 kubeadm.go:322] 
	I0914 22:52:03.721856   46412 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 22:52:03.721867   46412 kubeadm.go:322] 
	I0914 22:52:03.721945   46412 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:52:03.721954   46412 kubeadm.go:322] 
	I0914 22:52:03.722029   46412 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:03.722137   46412 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:03.722240   46412 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:03.722250   46412 kubeadm.go:322] 
	I0914 22:52:03.722367   46412 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:52:03.722468   46412 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:03.722479   46412 kubeadm.go:322] 
	I0914 22:52:03.722583   46412 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.722694   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:03.722719   46412 kubeadm.go:322] 	--control-plane 
	I0914 22:52:03.722752   46412 kubeadm.go:322] 
	I0914 22:52:03.722887   46412 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:03.722912   46412 kubeadm.go:322] 
	I0914 22:52:03.723031   46412 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.723170   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:03.724837   46412 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:03.724867   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:52:03.724879   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:03.726645   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:03.728115   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:03.741055   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:52:03.811746   46412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:03.811823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=embed-certs-588699 minikube.k8s.io/updated_at=2023_09_14T22_52_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:03.811827   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:00.805633   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.805831   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.807503   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.885499   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.886940   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.097721   46412 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:04.097763   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.186240   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.773886   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.273494   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.773993   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.274084   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.773309   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.273666   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.773916   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.274226   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.774073   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.807538   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.306062   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:06.886980   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.385212   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.274041   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:09.773409   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.274272   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.774321   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.274268   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.774250   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.273823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.774000   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.273596   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.774284   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.806015   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:14.308997   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:11.386087   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:12.580003   46713 pod_ready.go:81] duration metric: took 4m0.001053291s waiting for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:12.580035   46713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:12.580062   46713 pod_ready.go:38] duration metric: took 4m1.199260232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:12.580089   46713 kubeadm.go:640] restartCluster took 4m59.591702038s
	W0914 22:52:12.580145   46713 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:52:12.580169   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:52:14.274174   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:14.773472   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.273376   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.773286   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.273920   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.773334   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.926033   46412 kubeadm.go:1081] duration metric: took 13.114277677s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:16.926076   46412 kubeadm.go:406] StartCluster complete in 5m27.664586228s
	I0914 22:52:16.926099   46412 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.926229   46412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:16.928891   46412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.929177   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:16.929313   46412 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:16.929393   46412 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-588699"
	I0914 22:52:16.929408   46412 addons.go:69] Setting default-storageclass=true in profile "embed-certs-588699"
	I0914 22:52:16.929423   46412 addons.go:69] Setting metrics-server=true in profile "embed-certs-588699"
	I0914 22:52:16.929435   46412 addons.go:231] Setting addon metrics-server=true in "embed-certs-588699"
	W0914 22:52:16.929446   46412 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:16.929446   46412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-588699"
	I0914 22:52:16.929475   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:52:16.929508   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929418   46412 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-588699"
	W0914 22:52:16.929533   46412 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:16.929574   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929907   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929938   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929939   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929963   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929968   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929995   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.948975   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I0914 22:52:16.948990   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0914 22:52:16.948977   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0914 22:52:16.949953   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950006   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.949957   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950601   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950607   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950620   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950626   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950632   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950647   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.951159   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951191   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951410   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951808   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951829   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.951896   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951906   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.951911   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.961182   46412 addons.go:231] Setting addon default-storageclass=true in "embed-certs-588699"
	W0914 22:52:16.961207   46412 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:16.961236   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.961615   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.961637   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.976517   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0914 22:52:16.976730   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0914 22:52:16.977005   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977161   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977448   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977466   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977564   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977589   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977781   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977913   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977966   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.978108   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.980084   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.980429   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.982113   46412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:16.983227   46412 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:16.984383   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:16.984394   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:16.984407   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.983307   46412 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:16.984439   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:16.984455   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.987850   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0914 22:52:16.987989   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988270   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.988771   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.988788   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.988849   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.988867   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988894   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.989058   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.989528   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.989748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.990151   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.990172   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.990441   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:16.990597   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.990766   46412 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-588699" context rescaled to 1 replicas
	I0914 22:52:16.990794   46412 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:16.992351   46412 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:16.990986   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.991129   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.994003   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:16.994015   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.994097   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.994300   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.994607   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.007652   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0914 22:52:17.008127   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:17.008676   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:17.008699   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:17.009115   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:17.009301   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:17.010905   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:17.011169   46412 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.011183   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:17.011201   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:17.014427   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.014837   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:17.014865   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.015132   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:17.015299   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:17.015435   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:17.015585   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.124720   46412 node_ready.go:35] waiting up to 6m0s for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.124831   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:17.128186   46412 node_ready.go:49] node "embed-certs-588699" has status "Ready":"True"
	I0914 22:52:17.128211   46412 node_ready.go:38] duration metric: took 3.459847ms waiting for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.128221   46412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.133021   46412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138574   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.138594   46412 pod_ready.go:81] duration metric: took 5.550933ms waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138605   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151548   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.151569   46412 pod_ready.go:81] duration metric: took 12.956129ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151581   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169368   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.169393   46412 pod_ready.go:81] duration metric: took 17.803681ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169406   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.180202   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:17.180227   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:17.184052   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:17.227381   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:17.227411   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:17.233773   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.293762   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:17.293788   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:17.328911   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.328934   46412 pod_ready.go:81] duration metric: took 159.520585ms waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.328942   46412 pod_ready.go:38] duration metric: took 200.709608ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.328958   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:17.329008   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:17.379085   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:18.947663   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.822786746s)
	I0914 22:52:18.947705   46412 start.go:917] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:19.171809   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937996858s)
	I0914 22:52:19.171861   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171872   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.98779094s)
	I0914 22:52:19.171908   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171927   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171878   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171875   46412 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.842825442s)
	I0914 22:52:19.172234   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172277   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172292   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172289   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172307   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172322   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172352   46412 api_server.go:72] duration metric: took 2.181532709s to wait for apiserver process to appear ...
	I0914 22:52:19.172322   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172369   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.172377   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172387   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172396   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172410   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:52:19.172625   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172643   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172657   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172667   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172688   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172715   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172723   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172955   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172969   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.173012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.205041   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:52:19.209533   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:19.209561   46412 api_server.go:131] duration metric: took 37.185195ms to wait for apiserver health ...
	I0914 22:52:19.209573   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:19.225866   46412 system_pods.go:59] 7 kube-system pods found
	I0914 22:52:19.225893   46412 system_pods.go:61] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.225900   46412 system_pods.go:61] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.225908   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.225915   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.225921   46412 system_pods.go:61] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.225928   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.225934   46412 system_pods.go:61] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending
	I0914 22:52:19.225947   46412 system_pods.go:74] duration metric: took 16.366454ms to wait for pod list to return data ...
	I0914 22:52:19.225958   46412 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:19.232176   46412 default_sa.go:45] found service account: "default"
	I0914 22:52:19.232202   46412 default_sa.go:55] duration metric: took 6.234795ms for default service account to be created ...
	I0914 22:52:19.232221   46412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:19.238383   46412 system_pods.go:86] 7 kube-system pods found
	I0914 22:52:19.238415   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.238426   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.238433   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.238442   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.238448   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.238454   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.238463   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.238486   46412 retry.go:31] will retry after 271.864835ms: missing components: kube-dns
	I0914 22:52:19.431792   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.052667289s)
	I0914 22:52:19.431858   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.431875   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432217   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432254   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432265   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432277   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.432291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432561   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432615   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432626   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432637   46412 addons.go:467] Verifying addon metrics-server=true in "embed-certs-588699"
	I0914 22:52:19.434406   46412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:15.499654   45407 pod_ready.go:81] duration metric: took 4m0.00095032s waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:15.499683   45407 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:15.499692   45407 pod_ready.go:38] duration metric: took 4m4.80145633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:15.499709   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:15.499741   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:15.499821   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:15.551531   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:15.551573   45407 cri.go:89] found id: ""
	I0914 22:52:15.551584   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:15.551638   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.555602   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:15.555649   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:15.583476   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:15.583497   45407 cri.go:89] found id: ""
	I0914 22:52:15.583504   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:15.583541   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.587434   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:15.587499   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:15.614791   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:15.614813   45407 cri.go:89] found id: ""
	I0914 22:52:15.614821   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:15.614865   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.618758   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:15.618813   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:15.651772   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:15.651798   45407 cri.go:89] found id: ""
	I0914 22:52:15.651807   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:15.651862   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.656464   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:15.656533   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:15.701258   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:15.701289   45407 cri.go:89] found id: ""
	I0914 22:52:15.701299   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:15.701359   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.705980   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:15.706049   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:15.741616   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:15.741640   45407 cri.go:89] found id: ""
	I0914 22:52:15.741647   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:15.741702   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.745863   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:15.745913   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:15.779362   45407 cri.go:89] found id: ""
	I0914 22:52:15.779385   45407 logs.go:284] 0 containers: []
	W0914 22:52:15.779395   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:15.779403   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:15.779462   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:15.815662   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:15.815691   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.815698   45407 cri.go:89] found id: ""
	I0914 22:52:15.815707   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:15.815781   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.820879   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.826312   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:15.826338   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.864143   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:15.864175   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:16.401646   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:16.401689   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:16.442964   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:16.443000   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:16.612411   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:16.612444   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:16.664620   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:16.664652   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:16.702405   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:16.702432   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:16.738583   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:16.738615   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:16.752752   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:16.752788   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:16.793883   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:16.793924   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:16.825504   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:16.825531   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:16.879008   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:16.879046   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:16.910902   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:16.910941   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.477726   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:19.494214   45407 api_server.go:72] duration metric: took 4m15.689238s to wait for apiserver process to appear ...
	I0914 22:52:19.494240   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.494281   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:19.494341   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:19.534990   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:19.535014   45407 cri.go:89] found id: ""
	I0914 22:52:19.535023   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:19.535081   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.540782   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:19.540850   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:19.570364   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:19.570390   45407 cri.go:89] found id: ""
	I0914 22:52:19.570399   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:19.570465   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.575964   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:19.576027   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:19.608023   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:19.608047   45407 cri.go:89] found id: ""
	I0914 22:52:19.608056   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:19.608098   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.612290   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:19.612343   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:19.644658   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:19.644682   45407 cri.go:89] found id: ""
	I0914 22:52:19.644692   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:19.644743   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.651016   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:19.651092   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:19.693035   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:19.693059   45407 cri.go:89] found id: ""
	I0914 22:52:19.693068   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:19.693122   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.697798   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:19.697864   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:19.733805   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.733828   45407 cri.go:89] found id: ""
	I0914 22:52:19.733837   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:19.733890   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.737902   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:19.737976   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:19.765139   45407 cri.go:89] found id: ""
	I0914 22:52:19.765169   45407 logs.go:284] 0 containers: []
	W0914 22:52:19.765180   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:19.765188   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:19.765248   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:19.793734   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.793756   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:19.793761   45407 cri.go:89] found id: ""
	I0914 22:52:19.793767   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:19.793807   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.797559   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.801472   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:19.801492   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:19.937110   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:19.937138   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.987564   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:19.987599   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
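The 45407 run above gathers component logs the same way each round: ask crictl for container IDs matching a component name, then tail the newest 400 lines of each match. Below is a minimal Go sketch of that pattern, illustrative only (it is not minikube's logs.go); it assumes crictl and passwordless sudo are available on the node being inspected.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs mirrors the "crictl ps -a --quiet --name=X" followed by
// "crictl logs --tail 400 <id>" sequence seen in the log above.
func tailComponentLogs(name string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", name, err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container was found matching %q\n", name)
		return nil
	}
	logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
	if err != nil {
		return fmt.Errorf("gathering logs for %s: %w", ids[0], err)
	}
	fmt.Println(string(logs))
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		if err := tailComponentLogs(c); err != nil {
			fmt.Println(err)
		}
	}
}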
	I0914 22:52:19.436138   46412 addons.go:502] enable addons completed in 2.506819532s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:19.523044   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.523077   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.523089   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.523096   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.523103   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.523109   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.523115   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.523124   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.523137   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.523164   46412 retry.go:31] will retry after 369.359833ms: missing components: kube-dns
	I0914 22:52:19.900488   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.900529   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.900541   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.900550   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.900558   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.900564   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.900571   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.900587   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.900608   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.900630   46412 retry.go:31] will retry after 329.450987ms: missing components: kube-dns
	I0914 22:52:20.245124   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.245152   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.245160   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.245166   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.245171   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.245177   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.245185   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.245194   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.245204   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.245225   46412 retry.go:31] will retry after 392.738624ms: missing components: kube-dns
	I0914 22:52:20.645671   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.645706   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.645716   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.645725   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.645737   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.645747   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.645756   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.645770   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.645783   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.645803   46412 retry.go:31] will retry after 463.608084ms: missing components: kube-dns
	I0914 22:52:21.118889   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:21.118920   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Running
	I0914 22:52:21.118926   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:21.118931   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:21.118937   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:21.118941   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:21.118946   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:21.118954   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:21.118963   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:21.118971   46412 system_pods.go:126] duration metric: took 1.886741356s to wait for k8s-apps to be running ...
	I0914 22:52:21.118984   46412 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:21.119025   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:21.134331   46412 system_svc.go:56] duration metric: took 15.34035ms WaitForService to wait for kubelet.
	I0914 22:52:21.134358   46412 kubeadm.go:581] duration metric: took 4.143541631s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:21.134381   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:21.137182   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:21.137207   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:21.137230   46412 node_conditions.go:105] duration metric: took 2.834168ms to run NodePressure ...
	I0914 22:52:21.137243   46412 start.go:228] waiting for startup goroutines ...
	I0914 22:52:21.137252   46412 start.go:233] waiting for cluster config update ...
	I0914 22:52:21.137272   46412 start.go:242] writing updated cluster config ...
	I0914 22:52:21.137621   46412 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:21.184252   46412 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:21.186251   46412 out.go:177] * Done! kubectl is now configured to use "embed-certs-588699" cluster and "default" namespace by default
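The 46412 run above reaches "Done!" only after its system_pods retry loop stops reporting "missing components: kube-dns". The sketch below reproduces that wait in Go under stated assumptions: it shells out to kubectl instead of using minikube's internal client, and the kubeconfig path is the in-node one quoted in the log (on a host you would normally use ~/.kube/config instead).

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether every kube-system pod currently shows STATUS "Running".
func allRunning(kubeconfig string) (bool, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"get", "pods", "-n", "kube-system", "--no-headers").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.Fields(line)
		// Third column of --no-headers output is the pod status.
		if len(fields) < 3 || fields[2] != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	kubeconfig := "/var/lib/minikube/kubeconfig" // assumption: path taken from the log above
	delay := 300 * time.Millisecond
	for i := 0; i < 40; i++ {
		if ok, err := allRunning(kubeconfig); err == nil && ok {
			fmt.Println("all kube-system pods are running")
			return
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the interval, roughly like the retry steps in the log
	}
	fmt.Println("timed out waiting for kube-system pods")
}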
	I0914 22:52:20.022483   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:20.022512   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:20.062375   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:20.062403   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:20.099744   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:20.099776   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:20.129490   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:20.129515   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:20.165896   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:20.165922   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:20.692724   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:20.692758   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:20.761038   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:20.761086   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:20.777087   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:20.777114   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:20.808980   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:20.809020   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:20.845904   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:20.845942   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.393816   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:52:23.399946   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:52:23.401251   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:23.401271   45407 api_server.go:131] duration metric: took 3.907024801s to wait for apiserver health ...
	I0914 22:52:23.401279   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:23.401303   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:23.401346   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:23.433871   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.433895   45407 cri.go:89] found id: ""
	I0914 22:52:23.433905   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:23.433962   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.438254   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:23.438317   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:23.468532   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:23.468555   45407 cri.go:89] found id: ""
	I0914 22:52:23.468564   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:23.468626   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.473599   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:23.473658   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:23.509951   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:23.509976   45407 cri.go:89] found id: ""
	I0914 22:52:23.509986   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:23.510041   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.516637   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:23.516722   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:23.549562   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.549587   45407 cri.go:89] found id: ""
	I0914 22:52:23.549596   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:23.549653   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.553563   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:23.553626   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:23.584728   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:23.584749   45407 cri.go:89] found id: ""
	I0914 22:52:23.584756   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:23.584797   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.588600   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:23.588653   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:23.616590   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.616609   45407 cri.go:89] found id: ""
	I0914 22:52:23.616617   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:23.616669   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.620730   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:23.620782   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:23.648741   45407 cri.go:89] found id: ""
	I0914 22:52:23.648765   45407 logs.go:284] 0 containers: []
	W0914 22:52:23.648773   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:23.648781   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:23.648831   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:23.680814   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:23.680839   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:23.680846   45407 cri.go:89] found id: ""
	I0914 22:52:23.680854   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:23.680914   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.685954   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.690428   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:23.690459   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:23.818421   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:23.818456   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.867863   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:23.867894   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.903362   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:23.903393   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:23.943793   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:23.943820   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:24.538337   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:24.538390   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:24.585031   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:24.585072   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:24.639086   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:24.639120   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:24.650905   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:24.650925   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:24.698547   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:24.698590   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:24.745590   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:24.745619   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:24.777667   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:24.777697   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:24.811536   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:24.811565   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
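Between gathering passes, the 45407 run probes the apiserver's healthz endpoint (logged at 22:52:23 above, returning 200 "ok"). A standalone sketch of that probe follows; it assumes the 192.168.39.60:8443 address reported in the log and skips TLS verification because the request goes straight to the VM's IP rather than a name on the serving certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is issued for cluster names, not the raw VM IP.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.60:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}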
	I0914 22:52:25.132299   46713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (12.552094274s)
	I0914 22:52:25.132371   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:25.146754   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:52:25.155324   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:52:25.164387   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:52:25.164429   46713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0914 22:52:25.227970   46713 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0914 22:52:25.228029   46713 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:52:25.376482   46713 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:52:25.376603   46713 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:52:25.376721   46713 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:52:25.536163   46713 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:52:25.536339   46713 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:52:25.543555   46713 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0914 22:52:25.663579   46713 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:52:25.665315   46713 out.go:204]   - Generating certificates and keys ...
	I0914 22:52:25.665428   46713 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:52:25.665514   46713 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:52:25.665610   46713 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:52:25.665688   46713 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:52:25.665777   46713 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:52:25.665844   46713 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:52:25.665925   46713 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:52:25.666002   46713 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:52:25.666095   46713 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:52:25.666223   46713 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:52:25.666277   46713 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:52:25.666352   46713 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:52:25.931689   46713 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:52:26.088693   46713 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:52:26.251867   46713 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:52:26.566157   46713 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:52:26.567520   46713 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
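Before the kubeadm init above, the 46713 run checks for leftover kubeconfig files (the "config check failed, skipping stale config cleanup" lines at 22:52:25): if ls cannot access them, there is nothing stale to clean and init proceeds directly. A rough sketch of that check, assuming the same four paths:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	args := append([]string{"ls", "-la"}, confs...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		// Files missing (ls exits non-zero): skip cleanup and go straight to kubeadm init.
		fmt.Println("config check failed, skipping stale config cleanup:", err)
		return
	}
	fmt.Println("existing kubeconfig files found; a cleanup pass would run before kubeadm init")
}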
	I0914 22:52:27.360740   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:52:27.360780   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.360788   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.360795   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.360802   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.360809   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.360816   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.360827   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.360841   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.360848   45407 system_pods.go:74] duration metric: took 3.959563404s to wait for pod list to return data ...
	I0914 22:52:27.360859   45407 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:27.363690   45407 default_sa.go:45] found service account: "default"
	I0914 22:52:27.363715   45407 default_sa.go:55] duration metric: took 2.849311ms for default service account to be created ...
	I0914 22:52:27.363724   45407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:27.372219   45407 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:27.372520   45407 system_pods.go:89] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.372552   45407 system_pods.go:89] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.372571   45407 system_pods.go:89] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.372590   45407 system_pods.go:89] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.372602   45407 system_pods.go:89] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.372616   45407 system_pods.go:89] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.372744   45407 system_pods.go:89] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.372835   45407 system_pods.go:89] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.372845   45407 system_pods.go:126] duration metric: took 9.100505ms to wait for k8s-apps to be running ...
	I0914 22:52:27.372854   45407 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:27.373084   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:27.390112   45407 system_svc.go:56] duration metric: took 17.249761ms WaitForService to wait for kubelet.
	I0914 22:52:27.390137   45407 kubeadm.go:581] duration metric: took 4m23.585167656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:27.390174   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:27.393099   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:27.393123   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:27.393133   45407 node_conditions.go:105] duration metric: took 2.953927ms to run NodePressure ...
	I0914 22:52:27.393142   45407 start.go:228] waiting for startup goroutines ...
	I0914 22:52:27.393148   45407 start.go:233] waiting for cluster config update ...
	I0914 22:52:27.393156   45407 start.go:242] writing updated cluster config ...
	I0914 22:52:27.393379   45407 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:27.441228   45407 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:27.442889   45407 out.go:177] * Done! kubectl is now configured to use "no-preload-344363" cluster and "default" namespace by default
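After the "Done!" line the host kubeconfig already points at the new cluster. A quick way to confirm from the host, assuming (as minikube normally arranges) that the kubectl context is named after the profile, "no-preload-344363":

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumption: context name matches the minikube profile from the log above.
	out, err := exec.Command("kubectl", "--context", "no-preload-344363",
		"get", "nodes", "-o", "wide").CombinedOutput()
	if err != nil {
		fmt.Println("cluster check failed:", err)
	}
	fmt.Print(string(out))
}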
	I0914 22:52:26.569354   46713 out.go:204]   - Booting up control plane ...
	I0914 22:52:26.569484   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:52:26.582407   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:52:26.589858   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:52:26.591607   46713 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:52:26.596764   46713 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:52:37.101083   46713 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503887 seconds
	I0914 22:52:37.101244   46713 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:37.116094   46713 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:37.633994   46713 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:37.634186   46713 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-930717 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 22:52:38.144071   46713 kubeadm.go:322] [bootstrap-token] Using token: jnf2g9.h0rslaob8wj902ym
	I0914 22:52:38.145543   46713 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:38.145661   46713 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:38.153514   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:38.159575   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:38.164167   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:38.167903   46713 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:38.241317   46713 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:38.572283   46713 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:38.572309   46713 kubeadm.go:322] 
	I0914 22:52:38.572399   46713 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:38.572410   46713 kubeadm.go:322] 
	I0914 22:52:38.572526   46713 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:38.572547   46713 kubeadm.go:322] 
	I0914 22:52:38.572581   46713 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:38.572669   46713 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:38.572762   46713 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:38.572775   46713 kubeadm.go:322] 
	I0914 22:52:38.572836   46713 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:38.572926   46713 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:38.573012   46713 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:38.573020   46713 kubeadm.go:322] 
	I0914 22:52:38.573089   46713 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0914 22:52:38.573152   46713 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:38.573159   46713 kubeadm.go:322] 
	I0914 22:52:38.573222   46713 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573313   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:38.573336   46713 kubeadm.go:322]     --control-plane 	  
	I0914 22:52:38.573343   46713 kubeadm.go:322] 
	I0914 22:52:38.573406   46713 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:38.573414   46713 kubeadm.go:322] 
	I0914 22:52:38.573527   46713 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573687   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:38.574219   46713 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:38.574248   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:52:38.574261   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:38.575900   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:38.577300   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:38.587120   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:52:38.610197   46713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:38.610265   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.610267   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=old-k8s-version-930717 minikube.k8s.io/updated_at=2023_09_14T22_52_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.858082   46713 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:38.858297   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.960045   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:39.549581   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.049788   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.549998   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.049043   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.549875   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.049596   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.549039   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.049563   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.549663   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.049534   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.549938   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.049227   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.549171   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.049628   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.550019   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.049857   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.549272   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.049648   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.549709   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.049770   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.550050   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.048948   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.549154   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.049695   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.549811   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.049813   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.549858   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.049505   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.149056   46713 kubeadm.go:1081] duration metric: took 14.538858246s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:53.149093   46713 kubeadm.go:406] StartCluster complete in 5m40.2118148s
	I0914 22:52:53.149114   46713 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.149200   46713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:53.150928   46713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.151157   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:53.151287   46713 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:53.151382   46713 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151391   46713 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151405   46713 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-930717"
	I0914 22:52:53.151411   46713 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-930717"
	W0914 22:52:53.151413   46713 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:53.151419   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:52:53.151423   46713 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-930717"
	W0914 22:52:53.151433   46713 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:53.151479   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151412   46713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-930717"
	I0914 22:52:53.151484   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151796   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151820   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151958   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.152044   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.170764   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0914 22:52:53.170912   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0914 22:52:53.171012   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0914 22:52:53.171235   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171345   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171378   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171850   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171870   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171970   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171991   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171999   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.172019   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.172232   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172517   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172572   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172759   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.172910   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.172987   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.173110   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.173146   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.189453   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0914 22:52:53.189789   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.190229   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.190251   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.190646   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.190822   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.192990   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.195176   46713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:53.194738   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0914 22:52:53.196779   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:53.196797   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:53.196813   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.195752   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.197457   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.197476   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.197849   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.198026   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.200022   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.200176   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.201917   46713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:53.200654   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.200795   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.203540   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.203632   46713 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.203652   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.203844   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.204002   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.206460   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.206968   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.206998   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.207153   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.207303   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.207524   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.207672   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.253944   46713 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-930717"
	W0914 22:52:53.253968   46713 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:53.253990   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.254330   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.254377   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0914 22:52:53.270047   46713 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-930717" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0914 22:52:53.270077   46713 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0914 22:52:53.270099   46713 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:53.271730   46713 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:53.270422   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0914 22:52:53.273255   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:53.273653   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.274180   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.274206   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.274559   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.275121   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.275165   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.291000   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0914 22:52:53.291405   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.291906   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.291927   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.292312   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.292529   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.294366   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.294583   46713 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.294598   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:53.294611   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.297265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.297809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297895   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.298057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.298236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.298383   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.344235   46713 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.344478   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:53.350176   46713 node_ready.go:49] node "old-k8s-version-930717" has status "Ready":"True"
	I0914 22:52:53.350196   46713 node_ready.go:38] duration metric: took 5.934445ms waiting for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.350204   46713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:53.359263   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:53.359296   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:53.367792   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:53.384576   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.397687   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:53.397703   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:53.439813   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:53.439843   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:53.473431   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.499877   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:54.233171   46713 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:54.365130   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365156   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365178   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365198   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365438   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365465   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365476   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365481   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.365486   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365546   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365556   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365565   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365574   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367064   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367090   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367068   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367489   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367513   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367526   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.367540   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367489   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367757   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367810   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367852   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.830646   46713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330728839s)
	I0914 22:52:54.830698   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.830711   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831036   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831059   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831065   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.831080   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.831096   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831312   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831328   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831338   46713 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-930717"
	I0914 22:52:54.832992   46713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:54.834828   46713 addons.go:502] enable addons completed in 1.683549699s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:55.415046   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:57.878279   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:59.879299   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:01.879559   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:03.880088   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:05.880334   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.880355   46713 pod_ready.go:81] duration metric: took 12.512536425s waiting for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.880364   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885370   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.885386   46713 pod_ready.go:81] duration metric: took 5.016722ms waiting for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885394   46713 pod_ready.go:38] duration metric: took 12.535181673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:53:05.885413   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:53:05.885466   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:53:05.901504   46713 api_server.go:72] duration metric: took 12.631380008s to wait for apiserver process to appear ...
	I0914 22:53:05.901522   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:53:05.901534   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:53:05.907706   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:53:05.908445   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:53:05.908466   46713 api_server.go:131] duration metric: took 6.937898ms to wait for apiserver health ...
	I0914 22:53:05.908475   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:53:05.911983   46713 system_pods.go:59] 5 kube-system pods found
	I0914 22:53:05.912001   46713 system_pods.go:61] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.912008   46713 system_pods.go:61] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.912013   46713 system_pods.go:61] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.912022   46713 system_pods.go:61] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.912033   46713 system_pods.go:61] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.912043   46713 system_pods.go:74] duration metric: took 3.562804ms to wait for pod list to return data ...
	I0914 22:53:05.912054   46713 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:53:05.914248   46713 default_sa.go:45] found service account: "default"
	I0914 22:53:05.914267   46713 default_sa.go:55] duration metric: took 2.203622ms for default service account to be created ...
	I0914 22:53:05.914276   46713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:53:05.917292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:05.917310   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.917315   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.917319   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.917325   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.917331   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.917343   46713 retry.go:31] will retry after 277.910308ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.201147   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.201170   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.201175   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.201179   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.201185   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.201191   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.201205   46713 retry.go:31] will retry after 262.96693ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.470372   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.470410   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.470418   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.470425   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.470435   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.470446   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.470481   46713 retry.go:31] will retry after 486.428451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.961666   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.961693   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.961700   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.961706   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.961716   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.961724   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.961740   46713 retry.go:31] will retry after 524.467148ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:07.491292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:07.491315   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:07.491321   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:07.491325   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:07.491331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:07.491337   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:07.491370   46713 retry.go:31] will retry after 567.308028ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.063587   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.063612   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.063618   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.063622   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.063629   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.063635   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.063649   46713 retry.go:31] will retry after 723.150919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.791530   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.791561   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.791571   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.791578   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.791588   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.791597   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.791616   46713 retry.go:31] will retry after 1.173741151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:09.971866   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:09.971895   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:09.971903   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:09.971909   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:09.971919   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:09.971928   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:09.971946   46713 retry.go:31] will retry after 1.046713916s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:11.024191   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:11.024220   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:11.024226   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:11.024231   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:11.024238   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:11.024244   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:11.024260   46713 retry.go:31] will retry after 1.531910243s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:12.562517   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:12.562555   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:12.562564   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:12.562573   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:12.562584   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:12.562594   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:12.562612   46713 retry.go:31] will retry after 2.000243773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:14.570247   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:14.570284   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:14.570294   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:14.570303   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:14.570320   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:14.570329   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:14.570346   46713 retry.go:31] will retry after 2.095330784s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:16.670345   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:16.670372   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:16.670377   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:16.670382   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:16.670394   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:16.670401   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:16.670416   46713 retry.go:31] will retry after 2.811644755s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:19.488311   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:19.488339   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:19.488344   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:19.488348   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:19.488354   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:19.488362   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:19.488380   46713 retry.go:31] will retry after 3.274452692s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:22.768417   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:22.768446   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:22.768454   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:22.768461   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:22.768471   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:22.768481   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:22.768499   46713 retry.go:31] will retry after 5.52037196s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:28.294932   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:28.294958   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:28.294964   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:28.294967   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:28.294975   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:28.294980   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:28.294994   46713 retry.go:31] will retry after 4.305647383s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:32.605867   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:32.605894   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:32.605900   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:32.605903   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:32.605910   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:32.605915   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:32.605929   46713 retry.go:31] will retry after 8.214918081s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:40.825284   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:40.825314   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:40.825319   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:40.825324   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:40.825331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:40.825336   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:40.825352   46713 retry.go:31] will retry after 10.5220598s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:51.353809   46713 system_pods.go:86] 7 kube-system pods found
	I0914 22:53:51.353844   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:51.353851   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:51.353856   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Pending
	I0914 22:53:51.353862   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:51.353868   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Pending
	I0914 22:53:51.353878   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:51.353887   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:51.353907   46713 retry.go:31] will retry after 10.482387504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:54:01.842876   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:01.842900   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:01.842905   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:01.842909   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Pending
	I0914 22:54:01.842914   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:01.842918   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Pending
	I0914 22:54:01.842921   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:01.842925   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:01.842931   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:01.842937   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:01.842950   46713 retry.go:31] will retry after 14.535469931s: missing components: etcd, kube-controller-manager
	I0914 22:54:16.384703   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:16.384732   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:16.384738   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:16.384742   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Running
	I0914 22:54:16.384747   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:16.384751   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Running
	I0914 22:54:16.384754   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:16.384758   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:16.384766   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:16.384773   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:16.384782   46713 system_pods.go:126] duration metric: took 1m10.470499333s to wait for k8s-apps to be running ...
	I0914 22:54:16.384791   46713 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:54:16.384849   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:16.409329   46713 system_svc.go:56] duration metric: took 24.530447ms WaitForService to wait for kubelet.
	I0914 22:54:16.409359   46713 kubeadm.go:581] duration metric: took 1m23.139238057s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:54:16.409385   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:54:16.412461   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:54:16.412490   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:54:16.412505   46713 node_conditions.go:105] duration metric: took 3.107771ms to run NodePressure ...
	I0914 22:54:16.412519   46713 start.go:228] waiting for startup goroutines ...
	I0914 22:54:16.412529   46713 start.go:233] waiting for cluster config update ...
	I0914 22:54:16.412546   46713 start.go:242] writing updated cluster config ...
	I0914 22:54:16.412870   46713 ssh_runner.go:195] Run: rm -f paused
	I0914 22:54:16.460181   46713 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0914 22:54:16.461844   46713 out.go:177] 
	W0914 22:54:16.463221   46713 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0914 22:54:16.464486   46713 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0914 22:54:16.465912   46713 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-930717" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:46:53 UTC, ends at Thu 2023-09-14 23:03:18 UTC. --
	Sep 14 23:03:17 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:17.982609957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd8e1f8f-cc41-4e68-966d-e57fb401b307 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:17 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:17.982793925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd8e1f8f-cc41-4e68-966d-e57fb401b307 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.016556811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b061d94d-7f6c-4eda-8584-48fb22e8c663 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.016654224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b061d94d-7f6c-4eda-8584-48fb22e8c663 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.016869387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b061d94d-7f6c-4eda-8584-48fb22e8c663 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.054698289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=78ddf838-d6f2-43b6-b8b2-5784cb6fdab9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.054781664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=78ddf838-d6f2-43b6-b8b2-5784cb6fdab9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.055030134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=78ddf838-d6f2-43b6-b8b2-5784cb6fdab9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.088249694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8c5617b9-5595-4163-8d2e-0ca1376fda10 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.088328902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8c5617b9-5595-4163-8d2e-0ca1376fda10 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.088654194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8c5617b9-5595-4163-8d2e-0ca1376fda10 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.121098182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4d5f3bf4-6133-4a06-a5cf-5f8e160fa349 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.121183158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4d5f3bf4-6133-4a06-a5cf-5f8e160fa349 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.121374531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4d5f3bf4-6133-4a06-a5cf-5f8e160fa349 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.160260019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5574b731-a5ce-475b-a655-7805ef0e58bf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.160320216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5574b731-a5ce-475b-a655-7805ef0e58bf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.160561681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5574b731-a5ce-475b-a655-7805ef0e58bf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.186782323Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=6a93b6ea-c24f-4561-94a7-23144119d673 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.187065185Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e05f4d2d470fa02f75020c87558ff7ef4603e48a88d422dad95638c8029c2fd5,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-qjxtc,Uid:995d5d99-10f4-4928-b384-79e5b03b9a2b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731975865197453,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-qjxtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d5d99-10f4-4928-b384-79e5b03b9a2b,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:55.527316259Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-5dhgr,Uid:009c9ce3-6e97-44a7-89f5-7a456
6be5b1b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731974926194905,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:54.581455865Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-zh279,Uid:06e39db3-fd3a-4919-aa49-4aa8b21f59b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731974903659666,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotatio
ns:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:54.561666645Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:960b6941-9167-4b87-b0f8-4fd4ad1227aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731974707864821,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"container
s\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T22:52:54.361280495Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&PodSandboxMetadata{Name:kube-proxy-78njr,Uid:0704238a-5fb8-46d4-912c-4bbf7f419a12,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731973955830572,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,k8s-app: kube-proxy,pod-
template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:53.605214778Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-930717,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731947003073520,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-09-14T22:52:26.602308574Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&PodSandboxMetadata{Name:kub
e-controller-manager-old-k8s-version-930717,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731946997076712,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-09-14T22:52:26.598129749Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-930717,Uid:381b3b581ff73227b3cba8e1c96bc6c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731946981798540,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8
s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 381b3b581ff73227b3cba8e1c96bc6c0,kubernetes.io/config.seen: 2023-09-14T22:52:26.608716198Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-930717,Uid:ce1dcffe2ddeeabea9e697b171701efa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731946940153372,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ce1dcffe2ddeeabea9e697b171701efa,kubernetes.io/config.seen: 2023-09-14T22:52:26.597636308Z
,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=6a93b6ea-c24f-4561-94a7-23144119d673 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.187911284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a53a04c5-1d5c-4535-ad1d-80e0aeca742a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.187993853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a53a04c5-1d5c-4535-ad1d-80e0aeca742a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.188226277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a53a04c5-1d5c-4535-ad1d-80e0aeca742a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.198933550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5753a939-c4c3-4cd5-ac74-0a4ad428809a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.201587239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5753a939-c4c3-4cd5-ac74-0a4ad428809a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:03:18 old-k8s-version-930717 crio[713]: time="2023-09-14 23:03:18.202462613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5753a939-c4c3-4cd5-ac74-0a4ad428809a name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	505f9a835ea06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   c51a2bdff31e0
	9d3dabddbe65e       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   9ab70cb9a88a0
	89d3f1675bb2d       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   b96fab0054704
	3666121471cff       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   f213c4a0a6e67
	ea4aa381d0367       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   90de897d887d7
	f3780dded8c30       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   0e249a91091e3
	220096e104c5c       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   54731262bac6e
	8df143c7256d3       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   4ae89dfeddff5
	
	* 
	* ==> coredns [3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811] <==
	* .:53
	2023-09-14T22:52:55.563Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-09-14T22:52:55.563Z [INFO] CoreDNS-1.6.2
	2023-09-14T22:52:55.563Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-14T22:52:55.571Z [INFO] 127.0.0.1:40315 - 13800 "HINFO IN 1364800437933321559.4221059419903132037. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007273829s
	
	* 
	* ==> coredns [89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e] <==
	* .:53
	2023-09-14T22:52:55.602Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-09-14T22:52:55.602Z [INFO] CoreDNS-1.6.2
	2023-09-14T22:52:55.602Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-14T22:52:55.617Z [INFO] 127.0.0.1:56844 - 3262 "HINFO IN 9187367119679096330.7872013698849296893. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014555503s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-930717
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-930717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=old-k8s-version-930717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_52_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:52:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 23:02:33 +0000   Thu, 14 Sep 2023 22:52:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 23:02:33 +0000   Thu, 14 Sep 2023 22:52:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 23:02:33 +0000   Thu, 14 Sep 2023 22:52:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 23:02:33 +0000   Thu, 14 Sep 2023 22:52:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.70
	  Hostname:    old-k8s-version-930717
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 820a4887f0bd47b9a114e5e546ca5e2b
	 System UUID:                820a4887-f0bd-47b9-a114-e5e546ca5e2b
	 Boot ID:                    4e318042-261b-4123-9603-549b1ecafd50
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-5dhgr                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                coredns-5644d7b6d9-zh279                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-930717                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                kube-apiserver-old-k8s-version-930717             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                kube-controller-manager-old-k8s-version-930717    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                kube-proxy-78njr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-930717             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                metrics-server-74d5856cc6-qjxtc                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             340Mi (16%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-930717     Node old-k8s-version-930717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-930717     Node old-k8s-version-930717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-930717     Node old-k8s-version-930717 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-930717  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep14 22:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.087007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.431604] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.750359] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133117] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.363433] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep14 22:47] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.132272] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.155114] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.127414] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.253365] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +20.232780] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +0.435946] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.414608] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.924168] kauditd_printk_skb: 2 callbacks suppressed
	[Sep14 22:52] systemd-fstab-generator[3085]: Ignoring "noauto" for root device
	[  +0.751000] kauditd_printk_skb: 6 callbacks suppressed
	[Sep14 22:53] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974] <==
	* 2023-09-14 22:52:29.340729 I | raft: newRaft 3268eeb9c599aeb4 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-09-14 22:52:29.340744 I | raft: 3268eeb9c599aeb4 became follower at term 1
	2023-09-14 22:52:29.348380 W | auth: simple token is not cryptographically signed
	2023-09-14 22:52:29.352705 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-14 22:52:29.353825 I | etcdserver: 3268eeb9c599aeb4 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-14 22:52:29.354527 I | etcdserver/membership: added member 3268eeb9c599aeb4 [https://192.168.72.70:2380] to cluster 96a33227e2b23009
	2023-09-14 22:52:29.355127 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-14 22:52:29.355251 I | embed: listening for metrics on http://192.168.72.70:2381
	2023-09-14 22:52:29.355330 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-14 22:52:30.341261 I | raft: 3268eeb9c599aeb4 is starting a new election at term 1
	2023-09-14 22:52:30.341374 I | raft: 3268eeb9c599aeb4 became candidate at term 2
	2023-09-14 22:52:30.341392 I | raft: 3268eeb9c599aeb4 received MsgVoteResp from 3268eeb9c599aeb4 at term 2
	2023-09-14 22:52:30.341401 I | raft: 3268eeb9c599aeb4 became leader at term 2
	2023-09-14 22:52:30.341406 I | raft: raft.node: 3268eeb9c599aeb4 elected leader 3268eeb9c599aeb4 at term 2
	2023-09-14 22:52:30.341681 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-14 22:52:30.343264 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-14 22:52:30.343673 I | etcdserver: published {Name:old-k8s-version-930717 ClientURLs:[https://192.168.72.70:2379]} to cluster 96a33227e2b23009
	2023-09-14 22:52:30.343740 I | embed: ready to serve client requests
	2023-09-14 22:52:30.343939 I | embed: ready to serve client requests
	2023-09-14 22:52:30.345112 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-14 22:52:30.346347 I | embed: serving client requests on 192.168.72.70:2379
	2023-09-14 22:52:30.346453 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-14 22:52:54.707905 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-zh279\" " with result "range_response_count:1 size:1367" took too long (133.321798ms) to execute
	2023-09-14 23:02:30.370007 I | mvcc: store.index: compact 667
	2023-09-14 23:02:30.371870 I | mvcc: finished scheduled compaction at 667 (took 1.330432ms)
	
	* 
	* ==> kernel <==
	*  23:03:18 up 16 min,  0 users,  load average: 0.07, 0.15, 0.17
	Linux old-k8s-version-930717 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290] <==
	* I0914 22:55:56.275406       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 22:55:56.275564       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 22:55:56.275634       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:55:56.275643       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 22:57:34.513641       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 22:57:34.513959       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 22:57:34.514080       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:57:34.514110       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 22:58:34.514411       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 22:58:34.514591       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 22:58:34.514651       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:58:34.514659       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:00:34.515123       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 23:00:34.515251       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 23:00:34.515330       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:00:34.515341       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:02:34.516670       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 23:02:34.516802       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 23:02:34.516881       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:02:34.516891       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec] <==
	* E0914 22:56:55.371700       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 22:57:09.467425       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 22:57:25.624105       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 22:57:41.469289       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 22:57:55.875953       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 22:58:13.471115       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 22:58:26.127898       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 22:58:45.473098       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 22:58:56.379777       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 22:59:17.475096       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 22:59:26.631402       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 22:59:49.477116       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 22:59:56.883110       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:00:21.479125       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:00:27.135105       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:00:53.481769       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:00:57.386754       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:01:25.484407       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:01:27.638695       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:01:57.486795       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:01:57.891070       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0914 23:02:28.142940       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:02:29.489196       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:02:58.394767       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:03:01.490877       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237] <==
	* W0914 22:52:55.944294       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0914 22:52:55.961939       1 node.go:135] Successfully retrieved node IP: 192.168.72.70
	I0914 22:52:55.962054       1 server_others.go:149] Using iptables Proxier.
	I0914 22:52:55.966347       1 server.go:529] Version: v1.16.0
	I0914 22:52:55.970906       1 config.go:131] Starting endpoints config controller
	I0914 22:52:55.973917       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0914 22:52:55.976561       1 config.go:313] Starting service config controller
	I0914 22:52:55.976844       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0914 22:52:56.074283       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0914 22:52:56.077936       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c] <==
	* I0914 22:52:33.539241       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0914 22:52:33.599607       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:52:33.599972       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:52:33.600184       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:52:33.600422       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:52:33.602591       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:33.602704       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:52:33.602786       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:52:33.602840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:33.604853       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:52:33.605163       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:52:33.606967       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 22:52:34.600891       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:52:34.604276       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:52:34.607721       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:52:34.608990       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:52:34.609904       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:34.611693       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:52:34.612793       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:52:34.615291       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:52:34.615579       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:34.616589       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:52:34.617332       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 22:52:53.256657       1 factory.go:585] pod is already present in the activeQ
	E0914 22:52:53.308005       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:46:53 UTC, ends at Thu 2023-09-14 23:03:18 UTC. --
	Sep 14 22:58:38 old-k8s-version-930717 kubelet[3091]: E0914 22:58:38.292143    3091 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 22:58:38 old-k8s-version-930717 kubelet[3091]: E0914 22:58:38.292252    3091 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 22:58:38 old-k8s-version-930717 kubelet[3091]: E0914 22:58:38.292324    3091 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 22:58:38 old-k8s-version-930717 kubelet[3091]: E0914 22:58:38.292358    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 14 22:58:50 old-k8s-version-930717 kubelet[3091]: E0914 22:58:50.275972    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 22:59:02 old-k8s-version-930717 kubelet[3091]: E0914 22:59:02.276590    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 22:59:16 old-k8s-version-930717 kubelet[3091]: E0914 22:59:16.275823    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 22:59:31 old-k8s-version-930717 kubelet[3091]: E0914 22:59:31.275234    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 22:59:45 old-k8s-version-930717 kubelet[3091]: E0914 22:59:45.274773    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 22:59:59 old-k8s-version-930717 kubelet[3091]: E0914 22:59:59.275357    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:00:14 old-k8s-version-930717 kubelet[3091]: E0914 23:00:14.275784    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:00:27 old-k8s-version-930717 kubelet[3091]: E0914 23:00:27.274711    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:00:42 old-k8s-version-930717 kubelet[3091]: E0914 23:00:42.275245    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:00:54 old-k8s-version-930717 kubelet[3091]: E0914 23:00:54.276906    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:01:08 old-k8s-version-930717 kubelet[3091]: E0914 23:01:08.274562    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:01:19 old-k8s-version-930717 kubelet[3091]: E0914 23:01:19.275154    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:01:31 old-k8s-version-930717 kubelet[3091]: E0914 23:01:31.275347    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:01:46 old-k8s-version-930717 kubelet[3091]: E0914 23:01:46.274906    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:02:00 old-k8s-version-930717 kubelet[3091]: E0914 23:02:00.274874    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:02:14 old-k8s-version-930717 kubelet[3091]: E0914 23:02:14.275420    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:02:25 old-k8s-version-930717 kubelet[3091]: E0914 23:02:25.274891    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:02:26 old-k8s-version-930717 kubelet[3091]: E0914 23:02:26.334414    3091 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Sep 14 23:02:38 old-k8s-version-930717 kubelet[3091]: E0914 23:02:38.275535    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:02:51 old-k8s-version-930717 kubelet[3091]: E0914 23:02:51.275746    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:03:06 old-k8s-version-930717 kubelet[3091]: E0914 23:03:06.275323    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0] <==
	* I0914 22:52:56.124552       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:52:56.136628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:52:56.136687       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:52:56.149522       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:52:56.149983       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-930717_0fb98f8f-e029-479c-8cf4-8ebaed133129!
	I0914 22:52:56.154117       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea805da0-96d9-43f4-897c-c2a3a4575986", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-930717_0fb98f8f-e029-479c-8cf4-8ebaed133129 became leader
	I0914 22:52:56.261052       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-930717_0fb98f8f-e029-479c-8cf4-8ebaed133129!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-930717 -n old-k8s-version-930717
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-930717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-qjxtc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-930717 describe pod metrics-server-74d5856cc6-qjxtc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-930717 describe pod metrics-server-74d5856cc6-qjxtc: exit status 1 (61.065402ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-qjxtc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-930717 describe pod metrics-server-74d5856cc6-qjxtc: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (525.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-14 23:09:02.589984751 +0000 UTC m=+5564.784326386
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-799144 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-799144 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.454µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-799144 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-799144 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-799144 logs -n 25: (1.129086596s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799144       | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-930717        | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:51 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-588699                 | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-930717             | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC | 14 Sep 23 22:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-561154 | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:07 UTC |
	|         | disable-driver-mounts-561154                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-395546 --memory=2200 --alsologtostderr   | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:07 UTC |
	| start   | -p auto-104104 --memory=3072                           | auto-104104                  | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:08 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-395546             | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-395546                                   | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-395546                  | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-395546 --memory=2200 --alsologtostderr   | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| ssh     | -p auto-104104 pgrep -a                                | auto-104104                  | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:08 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-395546 sudo                              | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:08 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-395546                                   | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:08 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-395546                                   | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:08 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-395546                                   | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC | 14 Sep 23 23:09 UTC |
	| delete  | -p newest-cni-395546                                   | newest-cni-395546            | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	| start   | -p kindnet-104104                                      | kindnet-104104               | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 23:09:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 23:09:00.789651   53243 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:09:00.789800   53243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:09:00.789812   53243 out.go:309] Setting ErrFile to fd 2...
	I0914 23:09:00.789819   53243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:09:00.790027   53243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 23:09:00.790585   53243 out.go:303] Setting JSON to false
	I0914 23:09:00.791546   53243 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6683,"bootTime":1694726258,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 23:09:00.791610   53243 start.go:138] virtualization: kvm guest
	I0914 23:09:00.793944   53243 out.go:177] * [kindnet-104104] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 23:09:00.795576   53243 notify.go:220] Checking for updates...
	I0914 23:09:00.795580   53243 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:09:00.797012   53243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:09:00.798458   53243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 23:09:00.799856   53243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 23:09:00.801151   53243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 23:09:00.802441   53243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:09:00.804168   53243 config.go:182] Loaded profile config "auto-104104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:00.804292   53243 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:00.804411   53243 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:00.804522   53243 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:09:00.843274   53243 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 23:09:00.844914   53243 start.go:298] selected driver: kvm2
	I0914 23:09:00.844974   53243 start.go:902] validating driver "kvm2" against <nil>
	I0914 23:09:00.844992   53243 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:09:00.845665   53243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:09:00.845759   53243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 23:09:00.861011   53243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 23:09:00.861061   53243 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 23:09:00.861285   53243 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:09:00.861344   53243 cni.go:84] Creating CNI manager for "kindnet"
	I0914 23:09:00.861358   53243 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 23:09:00.861375   53243 start_flags.go:321] config:
	{Name:kindnet-104104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:09:00.861517   53243 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:09:00.863259   53243 out.go:177] * Starting control plane node kindnet-104104 in cluster kindnet-104104
	I0914 23:09:00.864676   53243 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:09:00.864717   53243 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0914 23:09:00.864726   53243 cache.go:57] Caching tarball of preloaded images
	I0914 23:09:00.864827   53243 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 23:09:00.864842   53243 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 23:09:00.864939   53243 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/config.json ...
	I0914 23:09:00.864971   53243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/config.json: {Name:mk98db160c540fa79ee5de85a39c560add104c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:00.865095   53243 start.go:365] acquiring machines lock for kindnet-104104: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:09:00.865121   53243 start.go:369] acquired machines lock for "kindnet-104104" in 14.384µs
	I0914 23:09:00.865190   53243 start.go:93] Provisioning new machine with config: &{Name:kindnet-104104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:kindnet-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 23:09:00.865302   53243 start.go:125] createHost starting for "" (driver="kvm2")
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:46:13 UTC, ends at Thu 2023-09-14 23:09:03 UTC. --
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.105384482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=36051015-550e-4901-aa1a-3591338d32dd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.105715369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=36051015-550e-4901-aa1a-3591338d32dd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.136149647Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=82dcae38-60fa-4526-b1dc-749491e5e12a name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.136413707Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&PodSandboxMetadata{Name:busybox,Uid:012aa3b5-77e6-4f18-a715-0b2b77e4caf8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731615303640666,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:46:47.338046708Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-8phxz,Uid:45bf5b67-3fc3-4aa7-90a0-2a2957384380,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169473
1615000870355,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:46:47.338047770Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ca20be8af2c9ef05b857598f1736a0cab9287ba3ffa9bf67914c5d0f5518e17,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-hfgp8,Uid:09b0d4cf-ab11-4677-88c4-f530af4643e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731611403460644,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-hfgp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09b0d4cf-ab11-4677-88c4-f530af4643e1,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14
T22:46:47.338044233Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731607688292678,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T22:46:47.338045408Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&PodSandboxMetadata{Name:kube-proxy-j2qmv,Uid:ca04e473-7bc4-4d56-ade1-0ae559f40dc9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731607684034748,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca04e473-7bc4-4d56-ade1-0ae559f40dc9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2023-09-14T22:46:47.338038508Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-799144,Uid:a01685043f02c1752cc818897c65fee3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731600876927320,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a01685043f02c1752cc818897c65fee3,kubernetes.io/config.seen: 2023-09-14T22:46:40.339196779Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-79
9144,Uid:0c563be4e3599500e857b86431f33760,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731600860303147,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c563be4e3599500e857b86431f33760,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0c563be4e3599500e857b86431f33760,kubernetes.io/config.seen: 2023-09-14T22:46:40.339195549Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-799144,Uid:bd18e0cb5393d8437d879abb73f5beea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731600852018677,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-def
ault-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.175:8444,kubernetes.io/config.hash: bd18e0cb5393d8437d879abb73f5beea,kubernetes.io/config.seen: 2023-09-14T22:46:40.339191908Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-799144,Uid:80294e3a8555a1593a1f189f3871c227,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731600840565533,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.50.175:2379,kubernetes.io/config.hash: 80294e3a8555a1593a1f189f3871c227,kubernetes.io/config.seen: 2023-09-14T22:46:40.339197599Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=82dcae38-60fa-4526-b1dc-749491e5e12a name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.137579040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=67387131-771f-4a2d-9d60-2d07d32aae93 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.137630953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=67387131-771f-4a2d-9d60-2d07d32aae93 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.137834737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=67387131-771f-4a2d-9d60-2d07d32aae93 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.141460140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0e7e67a5-0f66-4671-824c-9fb618e6510c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.141541147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0e7e67a5-0f66-4671-824c-9fb618e6510c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.141802117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0e7e67a5-0f66-4671-824c-9fb618e6510c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.174262200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96e3b639-fb75-4387-8f42-bfbd4bda802b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.174365693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96e3b639-fb75-4387-8f42-bfbd4bda802b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.174617291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96e3b639-fb75-4387-8f42-bfbd4bda802b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.209301813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0d4ca945-9c75-4088-9918-5edc12e5087d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.209415183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0d4ca945-9c75-4088-9918-5edc12e5087d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.209819027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0d4ca945-9c75-4088-9918-5edc12e5087d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.245934120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fe50c223-bf47-40da-b638-5af8e126abf9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.246053965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fe50c223-bf47-40da-b638-5af8e126abf9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.246240991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fe50c223-bf47-40da-b638-5af8e126abf9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.294220369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b99ba690-804d-4228-bf8d-1341d44b838d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.294310401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b99ba690-804d-4228-bf8d-1341d44b838d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.294546900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b99ba690-804d-4228-bf8d-1341d44b838d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.332868386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4bbb9e1c-1e7b-47e0-80ab-382c7f5e3e0c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.333098224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4bbb9e1c-1e7b-47e0-80ab-382c7f5e3e0c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:09:03 default-k8s-diff-port-799144 crio[707]: time="2023-09-14 23:09:03.333322436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731639613343140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099955c517e1f5b1e14a77cebc6256514bee6757a767306f8fb1d2d77a2988b2,PodSandboxId:88a2d3d4437e5eebfc5c1ae4fd4ffcc28d1b5d12c552c6df05d4deb6364bb544,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731618861476124,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 012aa3b5-77e6-4f18-a715-0b2b77e4caf8,},Annotations:map[string]string{io.kubernetes.container.hash: 646bd23b,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b,PodSandboxId:130c356cb6471a277f54233d9493c2f361d5f5a243336cb382410084327e61c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731615658787904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8phxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45bf5b67-3fc3-4aa7-90a0-2a2957384380,},Annotations:map[string]string{io.kubernetes.container.hash: bf8497f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb,PodSandboxId:c10b5135af26c3257ee3e3b7219f70790897bda3810b8f469569243cc81ea947,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731608381938721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2qmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca04e473-7bc4-4d56-ade1-0ae559f40dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d52648c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc,PodSandboxId:ce40ecb757b401f91ac75106eb6b684198178da2633d95eb5412e95131fef159,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694731608297526203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
cb8a357-0b1f-41ad-b5ba-dea4f1a690c7,},Annotations:map[string]string{io.kubernetes.container.hash: fe0efdcc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0,PodSandboxId:c9096b8ed93e7c179ec7d743eda3f65cbf1a190e7990213a7ac0fc8812e50664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731602027228389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80294e3a8555a1593a1f189f3871c227,},An
notations:map[string]string{io.kubernetes.container.hash: 5627a5f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c,PodSandboxId:aaa2117b4c309c1b3c87089c329fed57aecb6b3010ec61e5aa829a361dd7e096,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731601723023915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01685043f02c1752cc818897c65fee3,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019,PodSandboxId:83df4bc3f4baf7c99e434d66d7413b27ddbe8d13b6f361844363f407eca6a211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731601582751576,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd18e0cb5393d8437d879abb73f5beea,},An
notations:map[string]string{io.kubernetes.container.hash: 8dd3792c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2,PodSandboxId:5ed2f39d120a2268f2bc924d37d6a550fe11378b80345d1304a6640149e627f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731601301277419,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
c563be4e3599500e857b86431f33760,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4bbb9e1c-1e7b-47e0-80ab-382c7f5e3e0c name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	f5ece5e451cf6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   ce40ecb757b40
	099955c517e1f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   88a2d3d4437e5
	809210de2cd64       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      22 minutes ago      Running             coredns                   1                   130c356cb6471
	da519760d06f2       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      22 minutes ago      Running             kube-proxy                1                   c10b5135af26c
	5a644b09188e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   ce40ecb757b40
	95a2e35f25145       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      22 minutes ago      Running             etcd                      1                   c9096b8ed93e7
	8e23190d2ef54       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      22 minutes ago      Running             kube-scheduler            1                   aaa2117b4c309
	f149a35f98826       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      22 minutes ago      Running             kube-apiserver            1                   83df4bc3f4baf
	dae1ba10c6d57       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      22 minutes ago      Running             kube-controller-manager   1                   5ed2f39d120a2
	
	* 
	* ==> coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50917 - 63898 "HINFO IN 8693031495787485691.1317873420319016237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006894794s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-799144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-799144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=default-k8s-diff-port-799144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_39_45_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:39:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-799144
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 23:08:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 23:07:42 +0000   Thu, 14 Sep 2023 22:39:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 23:07:42 +0000   Thu, 14 Sep 2023 22:39:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 23:07:42 +0000   Thu, 14 Sep 2023 22:39:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 23:07:42 +0000   Thu, 14 Sep 2023 22:46:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.175
	  Hostname:    default-k8s-diff-port-799144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4a9f75a6867453fb762cb9af543d17a
	  System UUID:                b4a9f75a-6867-453f-b762-cb9af543d17a
	  Boot ID:                    79147eff-56bd-419b-a416-69d8f252b3e9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-5dd5756b68-8phxz                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-799144                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-799144              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-799144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-j2qmv                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-799144              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-hfgp8                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-799144 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-799144 event: Registered Node default-k8s-diff-port-799144 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-799144 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-799144 event: Registered Node default-k8s-diff-port-799144 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep14 22:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.211257] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.796993] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135698] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.454178] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.838915] systemd-fstab-generator[634]: Ignoring "noauto" for root device
	[  +0.111182] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.129885] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.121099] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.194586] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[ +16.977526] systemd-fstab-generator[905]: Ignoring "noauto" for root device
	[ +14.956871] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] <==
	* {"level":"info","ts":"2023-09-14T22:46:52.595673Z","caller":"traceutil/trace.go:171","msg":"trace[479671844] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-799144; range_end:; response_count:1; response_revision:511; }","duration":"104.105269ms","start":"2023-09-14T22:46:52.491562Z","end":"2023-09-14T22:46:52.595667Z","steps":["trace[479671844] 'agreement among raft nodes before linearized reading'  (duration: 103.981306ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:46:52.59588Z","caller":"traceutil/trace.go:171","msg":"trace[288169016] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"111.684904ms","start":"2023-09-14T22:46:52.484188Z","end":"2023-09-14T22:46:52.595873Z","steps":["trace[288169016] 'process raft request'  (duration: 111.179449ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:47:38.456548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.414694ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2373267904961486725 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" mod_revision:562 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-14T22:47:38.456807Z","caller":"traceutil/trace.go:171","msg":"trace[1089433272] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"123.842606ms","start":"2023-09-14T22:47:38.332934Z","end":"2023-09-14T22:47:38.456776Z","steps":["trace[1089433272] 'process raft request'  (duration: 123.808849ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:47:38.457174Z","caller":"traceutil/trace.go:171","msg":"trace[419564732] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"310.858481ms","start":"2023-09-14T22:47:38.146297Z","end":"2023-09-14T22:47:38.457156Z","steps":["trace[419564732] 'process raft request'  (duration: 194.63582ms)","trace[419564732] 'compare'  (duration: 115.311609ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:47:38.457261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T22:47:38.146282Z","time spent":"310.935409ms","remote":"127.0.0.1:36340","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":600,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" mod_revision:562 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" > >"}
	{"level":"info","ts":"2023-09-14T22:47:38.457477Z","caller":"traceutil/trace.go:171","msg":"trace[995560540] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"239.590026ms","start":"2023-09-14T22:47:38.217877Z","end":"2023-09-14T22:47:38.457467Z","steps":["trace[995560540] 'process raft request'  (duration: 238.793725ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:56:45.249916Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":786}
	{"level":"info","ts":"2023-09-14T22:56:45.255374Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":786,"took":"4.513283ms","hash":2275629792}
	{"level":"info","ts":"2023-09-14T22:56:45.255539Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2275629792,"revision":786,"compact-revision":-1}
	{"level":"info","ts":"2023-09-14T23:01:45.259181Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1028}
	{"level":"info","ts":"2023-09-14T23:01:45.261063Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1028,"took":"1.430204ms","hash":2809314105}
	{"level":"info","ts":"2023-09-14T23:01:45.261144Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2809314105,"revision":1028,"compact-revision":786}
	{"level":"info","ts":"2023-09-14T23:06:45.268476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1270}
	{"level":"info","ts":"2023-09-14T23:06:45.270641Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1270,"took":"1.794022ms","hash":2523408157}
	{"level":"info","ts":"2023-09-14T23:06:45.270709Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2523408157,"revision":1270,"compact-revision":1028}
	{"level":"info","ts":"2023-09-14T23:07:34.025391Z","caller":"traceutil/trace.go:171","msg":"trace[323043754] transaction","detail":"{read_only:false; response_revision:1555; number_of_response:1; }","duration":"394.688578ms","start":"2023-09-14T23:07:33.630647Z","end":"2023-09-14T23:07:34.025335Z","steps":["trace[323043754] 'process raft request'  (duration: 394.41326ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T23:07:34.025803Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T23:07:33.630626Z","time spent":"394.93202ms","remote":"127.0.0.1:36340","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" mod_revision:1547 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-799144\" > >"}
	{"level":"info","ts":"2023-09-14T23:07:34.12059Z","caller":"traceutil/trace.go:171","msg":"trace[1147791235] linearizableReadLoop","detail":"{readStateIndex:1847; appliedIndex:1846; }","duration":"245.360039ms","start":"2023-09-14T23:07:33.875217Z","end":"2023-09-14T23:07:34.120577Z","steps":["trace[1147791235] 'read index received'  (duration: 150.881439ms)","trace[1147791235] 'applied index is now lower than readState.Index'  (duration: 94.47772ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T23:07:34.120762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.547304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-14T23:07:34.120836Z","caller":"traceutil/trace.go:171","msg":"trace[590796136] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1555; }","duration":"245.634556ms","start":"2023-09-14T23:07:33.875188Z","end":"2023-09-14T23:07:34.120822Z","steps":["trace[590796136] 'agreement among raft nodes before linearized reading'  (duration: 245.505327ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:07:57.87043Z","caller":"traceutil/trace.go:171","msg":"trace[180036032] transaction","detail":"{read_only:false; response_revision:1574; number_of_response:1; }","duration":"153.2022ms","start":"2023-09-14T23:07:57.717188Z","end":"2023-09-14T23:07:57.87039Z","steps":["trace[180036032] 'process raft request'  (duration: 152.66172ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:07:58.516375Z","caller":"traceutil/trace.go:171","msg":"trace[1461618231] transaction","detail":"{read_only:false; response_revision:1575; number_of_response:1; }","duration":"211.802389ms","start":"2023-09-14T23:07:58.304557Z","end":"2023-09-14T23:07:58.516359Z","steps":["trace[1461618231] 'process raft request'  (duration: 207.791457ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:08:00.055511Z","caller":"traceutil/trace.go:171","msg":"trace[1808289766] transaction","detail":"{read_only:false; response_revision:1576; number_of_response:1; }","duration":"176.404929ms","start":"2023-09-14T23:07:59.879086Z","end":"2023-09-14T23:08:00.055491Z","steps":["trace[1808289766] 'process raft request'  (duration: 176.278786ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:08:34.545291Z","caller":"traceutil/trace.go:171","msg":"trace[364629171] transaction","detail":"{read_only:false; response_revision:1605; number_of_response:1; }","duration":"207.212392ms","start":"2023-09-14T23:08:34.33805Z","end":"2023-09-14T23:08:34.545262Z","steps":["trace[364629171] 'process raft request'  (duration: 206.706079ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:09:03 up 22 min,  0 users,  load average: 0.08, 0.14, 0.10
	Linux default-k8s-diff-port-799144 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] <==
	* I0914 23:06:46.691172       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.35.95:443: connect: connection refused
	I0914 23:06:46.691223       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:06:46.847351       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:06:46.847489       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:06:46.848289       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.35.95:443: connect: connection refused
	I0914 23:06:46.848307       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:06:47.848161       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:06:47.848251       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:06:47.848259       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:06:47.848416       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:06:47.848492       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 23:06:47.849686       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:07:46.690293       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.35.95:443: connect: connection refused
	I0914 23:07:46.690382       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:07:47.850200       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:07:47.850719       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:07:47.850832       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:07:47.851418       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:07:47.851478       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 23:07:47.852661       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:08:46.689645       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.35.95:443: connect: connection refused
	I0914 23:08:46.689797       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] <==
	* I0914 23:03:32.252627       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:04:01.688581       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:04:02.260579       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:04:31.695653       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:04:32.269068       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:05:01.701779       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:05:02.280831       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:05:31.708776       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:05:32.289942       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:06:01.715396       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:06:02.302040       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:06:31.722400       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:06:32.310746       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:07:01.730311       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:07:02.323196       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:07:31.738477       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:07:32.334043       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:08:01.745918       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:08:02.343920       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 23:08:04.394479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="769.196µs"
	I0914 23:08:16.388504       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="205.003µs"
	E0914 23:08:31.758482       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:08:32.355269       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:09:01.763819       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:09:02.367343       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] <==
	* I0914 22:46:49.480198       1 server_others.go:69] "Using iptables proxy"
	I0914 22:46:49.925014       1 node.go:141] Successfully retrieved node IP: 192.168.50.175
	I0914 22:46:49.967675       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:46:49.967813       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:46:49.970819       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:46:49.970916       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:46:49.971364       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:46:49.971585       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:46:49.972452       1 config.go:188] "Starting service config controller"
	I0914 22:46:49.972494       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:46:49.972515       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:46:49.972519       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:46:49.973044       1 config.go:315] "Starting node config controller"
	I0914 22:46:49.973073       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:46:50.072837       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:46:50.073027       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:46:50.073287       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] <==
	* I0914 22:46:44.200410       1 serving.go:348] Generated self-signed cert in-memory
	W0914 22:46:46.810859       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 22:46:46.810904       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:46:46.810920       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:46:46.810926       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:46:46.845927       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:46:46.846087       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:46:46.847291       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:46:46.847372       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:46:46.848081       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:46:46.848156       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:46:46.947663       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:46:13 UTC, ends at Thu 2023-09-14 23:09:03 UTC. --
	Sep 14 23:06:40 default-k8s-diff-port-799144 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:06:40 default-k8s-diff-port-799144 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:06:40 default-k8s-diff-port-799144 kubelet[911]: E0914 23:06:40.408742     911 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Sep 14 23:06:46 default-k8s-diff-port-799144 kubelet[911]: E0914 23:06:46.373028     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:06:58 default-k8s-diff-port-799144 kubelet[911]: E0914 23:06:58.373684     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:07:12 default-k8s-diff-port-799144 kubelet[911]: E0914 23:07:12.374325     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:07:24 default-k8s-diff-port-799144 kubelet[911]: E0914 23:07:24.374916     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:07:39 default-k8s-diff-port-799144 kubelet[911]: E0914 23:07:39.373660     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:07:40 default-k8s-diff-port-799144 kubelet[911]: E0914 23:07:40.389292     911 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:07:40 default-k8s-diff-port-799144 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:07:40 default-k8s-diff-port-799144 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:07:40 default-k8s-diff-port-799144 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:07:50 default-k8s-diff-port-799144 kubelet[911]: E0914 23:07:50.404316     911 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 14 23:07:50 default-k8s-diff-port-799144 kubelet[911]: E0914 23:07:50.404391     911 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 14 23:07:50 default-k8s-diff-port-799144 kubelet[911]: E0914 23:07:50.405057     911 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2jmch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-hfgp8_kube-system(09b0d4cf-ab11-4677-88c4-f530af4643e1): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 23:07:50 default-k8s-diff-port-799144 kubelet[911]: E0914 23:07:50.405152     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:08:04 default-k8s-diff-port-799144 kubelet[911]: E0914 23:08:04.375234     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:08:16 default-k8s-diff-port-799144 kubelet[911]: E0914 23:08:16.373566     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:08:28 default-k8s-diff-port-799144 kubelet[911]: E0914 23:08:28.373907     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:08:40 default-k8s-diff-port-799144 kubelet[911]: E0914 23:08:40.390185     911 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:08:40 default-k8s-diff-port-799144 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:08:40 default-k8s-diff-port-799144 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:08:40 default-k8s-diff-port-799144 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:08:41 default-k8s-diff-port-799144 kubelet[911]: E0914 23:08:41.372844     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	Sep 14 23:08:52 default-k8s-diff-port-799144 kubelet[911]: E0914 23:08:52.374338     911 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hfgp8" podUID="09b0d4cf-ab11-4677-88c4-f530af4643e1"
	
	* 
	* ==> storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] <==
	* I0914 22:46:49.064878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 22:47:19.068661       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] <==
	* I0914 22:47:19.720941       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:47:19.737711       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:47:19.737767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:47:37.145640       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:47:37.145884       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-799144_6156d333-5706-43bc-93d7-6bfcc42511b8!
	I0914 22:47:37.147833       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d62f02f3-7ad6-456b-a5fd-2b92f0ceaac6", APIVersion:"v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-799144_6156d333-5706-43bc-93d7-6bfcc42511b8 became leader
	I0914 22:47:37.246044       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-799144_6156d333-5706-43bc-93d7-6bfcc42511b8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-799144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-hfgp8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-799144 describe pod metrics-server-57f55c9bc5-hfgp8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-799144 describe pod metrics-server-57f55c9bc5-hfgp8: exit status 1 (69.048807ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-hfgp8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-799144 describe pod metrics-server-57f55c9bc5-hfgp8: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (525.53s)
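The addon check this test performs can be repeated by hand against the same profile; a rough sketch, assuming the k8s-app=kubernetes-dashboard selector that the embed-certs variant below waits for, and reusing the context name and 9-minute window from the test output above:

	kubectl --context default-k8s-diff-port-799144 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-799144 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s

Neither command is part of the harness; they only approximate the wait that timed out here.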

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-588699 -n embed-certs-588699
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-14 23:10:24.824075584 +0000 UTC m=+5647.018417220
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-588699 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-588699 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (105.422577ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-588699 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
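Because the describe call failed with NotFound, the image assertion above never had any deployment info to inspect. A minimal manual equivalent, assuming the dashboard-metrics-scraper deployment existed in the kubernetes-dashboard namespace (in this run it does not, per the stderr above):

	kubectl --context embed-certs-588699 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

Per the assertion above, the harness expects that output to contain registry.k8s.io/echoserver:1.4.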
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-588699 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-588699 logs -n 25: (1.392908936s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo journalctl                       | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo cat                              | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo cat                              | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo cat                              | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo docker                           | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo cat                              | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo cat                              | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo                                  | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo cat                              | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo cat                              | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo containerd                       | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo systemctl                        | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo find                             | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-104104 sudo crio                             | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-104104                                       | auto-104104           | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	| start   | -p custom-flannel-104104                             | custom-flannel-104104 | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-104104 pgrep -a                           | kindnet-104104        | jenkins | v1.31.2 | 14 Sep 23 23:10 UTC | 14 Sep 23 23:10 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 23:09:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 23:09:21.387900   54941 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:09:21.388143   54941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:09:21.388153   54941 out.go:309] Setting ErrFile to fd 2...
	I0914 23:09:21.388157   54941 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:09:21.388307   54941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 23:09:21.388854   54941 out.go:303] Setting JSON to false
	I0914 23:09:21.389845   54941 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6704,"bootTime":1694726258,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 23:09:21.389895   54941 start.go:138] virtualization: kvm guest
	I0914 23:09:21.391979   54941 out.go:177] * [custom-flannel-104104] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 23:09:21.393472   54941 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:09:21.393524   54941 notify.go:220] Checking for updates...
	I0914 23:09:21.394661   54941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:09:21.395955   54941 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 23:09:21.397250   54941 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 23:09:21.398457   54941 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 23:09:21.399788   54941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:09:21.401518   54941 config.go:182] Loaded profile config "calico-104104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:21.401617   54941 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:21.401704   54941 config.go:182] Loaded profile config "kindnet-104104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:21.401773   54941 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:09:21.436791   54941 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 23:09:21.438087   54941 start.go:298] selected driver: kvm2
	I0914 23:09:21.438096   54941 start.go:902] validating driver "kvm2" against <nil>
	I0914 23:09:21.438111   54941 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:09:21.438718   54941 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:09:21.438782   54941 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 23:09:21.452969   54941 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 23:09:21.453009   54941 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 23:09:21.453201   54941 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 23:09:21.453231   54941 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0914 23:09:21.453240   54941 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0914 23:09:21.453262   54941 start_flags.go:321] config:
	{Name:custom-flannel-104104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:09:21.453387   54941 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:09:21.455264   54941 out.go:177] * Starting control plane node custom-flannel-104104 in cluster custom-flannel-104104
	I0914 23:09:25.095670   53573 start.go:369] acquired machines lock for "calico-104104" in 19.515898723s
	I0914 23:09:25.095727   53573 start.go:93] Provisioning new machine with config: &{Name:calico-104104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:calico-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 23:09:25.095874   53573 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 23:09:25.098901   53573 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:09:25.099113   53573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:09:25.099160   53573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:09:25.118506   53573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I0914 23:09:25.118855   53573 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:09:25.119403   53573 main.go:141] libmachine: Using API Version  1
	I0914 23:09:25.119418   53573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:09:25.119793   53573 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:09:25.120010   53573 main.go:141] libmachine: (calico-104104) Calling .GetMachineName
	I0914 23:09:25.120191   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:09:25.120343   53573 start.go:159] libmachine.API.Create for "calico-104104" (driver="kvm2")
	I0914 23:09:25.120390   53573 client.go:168] LocalClient.Create starting
	I0914 23:09:25.120424   53573 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem
	I0914 23:09:25.120458   53573 main.go:141] libmachine: Decoding PEM data...
	I0914 23:09:25.120480   53573 main.go:141] libmachine: Parsing certificate...
	I0914 23:09:25.120543   53573 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem
	I0914 23:09:25.120573   53573 main.go:141] libmachine: Decoding PEM data...
	I0914 23:09:25.120598   53573 main.go:141] libmachine: Parsing certificate...
	I0914 23:09:25.120628   53573 main.go:141] libmachine: Running pre-create checks...
	I0914 23:09:25.120642   53573 main.go:141] libmachine: (calico-104104) Calling .PreCreateCheck
	I0914 23:09:25.120985   53573 main.go:141] libmachine: (calico-104104) Calling .GetConfigRaw
	I0914 23:09:25.121365   53573 main.go:141] libmachine: Creating machine...
	I0914 23:09:25.121380   53573 main.go:141] libmachine: (calico-104104) Calling .Create
	I0914 23:09:25.121511   53573 main.go:141] libmachine: (calico-104104) Creating KVM machine...
	I0914 23:09:25.122514   53573 main.go:141] libmachine: (calico-104104) DBG | found existing default KVM network
	I0914 23:09:25.123786   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:25.123639   55002 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015e50}
	I0914 23:09:25.129039   53573 main.go:141] libmachine: (calico-104104) DBG | trying to create private KVM network mk-calico-104104 192.168.39.0/24...
	I0914 23:09:25.201899   53573 main.go:141] libmachine: (calico-104104) DBG | private KVM network mk-calico-104104 192.168.39.0/24 created
	I0914 23:09:25.201931   53573 main.go:141] libmachine: (calico-104104) Setting up store path in /home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104 ...
	I0914 23:09:25.201947   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:25.201858   55002 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 23:09:25.201977   53573 main.go:141] libmachine: (calico-104104) Building disk image from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso
	I0914 23:09:25.202062   53573 main.go:141] libmachine: (calico-104104) Downloading /home/jenkins/minikube-integration/17243-6287/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso...
	I0914 23:09:25.417569   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:25.417467   55002 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/id_rsa...
	I0914 23:09:23.500607   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.501156   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has current primary IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.501190   53243 main.go:141] libmachine: (kindnet-104104) Found IP for machine: 192.168.72.231
	I0914 23:09:23.501201   53243 main.go:141] libmachine: (kindnet-104104) Reserving static IP address...
	I0914 23:09:23.501671   53243 main.go:141] libmachine: (kindnet-104104) DBG | unable to find host DHCP lease matching {name: "kindnet-104104", mac: "52:54:00:2c:42:8b", ip: "192.168.72.231"} in network mk-kindnet-104104
	I0914 23:09:23.574643   53243 main.go:141] libmachine: (kindnet-104104) Reserved static IP address: 192.168.72.231
	I0914 23:09:23.574682   53243 main.go:141] libmachine: (kindnet-104104) DBG | Getting to WaitForSSH function...
	I0914 23:09:23.574692   53243 main.go:141] libmachine: (kindnet-104104) Waiting for SSH to be available...
	I0914 23:09:23.577509   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.577900   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:23.577935   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.578049   53243 main.go:141] libmachine: (kindnet-104104) DBG | Using SSH client type: external
	I0914 23:09:23.578071   53243 main.go:141] libmachine: (kindnet-104104) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/kindnet-104104/id_rsa (-rw-------)
	I0914 23:09:23.578105   53243 main.go:141] libmachine: (kindnet-104104) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/kindnet-104104/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 23:09:23.578120   53243 main.go:141] libmachine: (kindnet-104104) DBG | About to run SSH command:
	I0914 23:09:23.578133   53243 main.go:141] libmachine: (kindnet-104104) DBG | exit 0
	I0914 23:09:23.663332   53243 main.go:141] libmachine: (kindnet-104104) DBG | SSH cmd err, output: <nil>: 
	I0914 23:09:23.663623   53243 main.go:141] libmachine: (kindnet-104104) KVM machine creation complete!
	I0914 23:09:23.663910   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetConfigRaw
	I0914 23:09:23.664457   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:23.664642   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:23.664790   53243 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 23:09:23.664807   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetState
	I0914 23:09:23.666225   53243 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 23:09:23.666239   53243 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 23:09:23.666246   53243 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 23:09:23.666252   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:23.668529   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.668959   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:23.668986   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.669159   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:23.669338   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:23.669499   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:23.669648   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:23.669821   53243 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:23.670198   53243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0914 23:09:23.670212   53243 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 23:09:23.782319   53243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:09:23.782347   53243 main.go:141] libmachine: Detecting the provisioner...
	I0914 23:09:23.782359   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:23.785198   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.785538   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:23.785562   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.785724   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:23.785906   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:23.786065   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:23.786233   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:23.786428   53243 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:23.786770   53243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0914 23:09:23.786784   53243 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 23:09:23.896082   53243 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g52d8811-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0914 23:09:23.896191   53243 main.go:141] libmachine: found compatible host: buildroot
	I0914 23:09:23.896207   53243 main.go:141] libmachine: Provisioning with buildroot...
	I0914 23:09:23.896219   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetMachineName
	I0914 23:09:23.896494   53243 buildroot.go:166] provisioning hostname "kindnet-104104"
	I0914 23:09:23.896521   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetMachineName
	I0914 23:09:23.896702   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:23.899545   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.899953   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:23.899983   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:23.900124   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:23.900301   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:23.900437   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:23.900558   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:23.900674   53243 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:23.901051   53243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0914 23:09:23.901072   53243 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-104104 && echo "kindnet-104104" | sudo tee /etc/hostname
	I0914 23:09:24.022636   53243 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-104104
	
	I0914 23:09:24.022662   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:24.025607   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.025989   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.026013   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.026171   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:24.026345   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:24.026511   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:24.026648   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:24.026837   53243 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:24.027159   53243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0914 23:09:24.027180   53243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-104104' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-104104/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-104104' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:09:24.142657   53243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:09:24.142683   53243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 23:09:24.142718   53243 buildroot.go:174] setting up certificates
	I0914 23:09:24.142730   53243 provision.go:83] configureAuth start
	I0914 23:09:24.142744   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetMachineName
	I0914 23:09:24.143007   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetIP
	I0914 23:09:24.145565   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.146004   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.146035   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.146182   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:24.148164   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.148470   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.148499   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.148646   53243 provision.go:138] copyHostCerts
	I0914 23:09:24.148721   53243 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 23:09:24.148734   53243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 23:09:24.148811   53243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 23:09:24.148956   53243 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 23:09:24.148969   53243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 23:09:24.149010   53243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 23:09:24.149098   53243 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 23:09:24.149108   53243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 23:09:24.149142   53243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 23:09:24.149202   53243 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.kindnet-104104 san=[192.168.72.231 192.168.72.231 localhost 127.0.0.1 minikube kindnet-104104]
	I0914 23:09:24.394451   53243 provision.go:172] copyRemoteCerts
	I0914 23:09:24.394517   53243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:09:24.394544   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:24.397184   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.397577   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.397614   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.397724   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:24.397924   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:24.398081   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:24.398190   53243 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/kindnet-104104/id_rsa Username:docker}
	I0914 23:09:24.485045   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:09:24.505275   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 23:09:24.525274   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:09:24.545659   53243 provision.go:86] duration metric: configureAuth took 402.916002ms
	I0914 23:09:24.545682   53243 buildroot.go:189] setting minikube options for container-runtime
	I0914 23:09:24.545839   53243 config.go:182] Loaded profile config "kindnet-104104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:24.545912   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:24.548590   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.548967   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.549000   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.549135   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:24.549340   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:24.549505   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:24.549666   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:24.549826   53243 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:24.550182   53243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0914 23:09:24.550201   53243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:09:24.843079   53243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:09:24.843112   53243 main.go:141] libmachine: Checking connection to Docker...
	I0914 23:09:24.843122   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetURL
	I0914 23:09:24.844357   53243 main.go:141] libmachine: (kindnet-104104) DBG | Using libvirt version 6000000
	I0914 23:09:24.846418   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.846715   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.846745   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.847023   53243 main.go:141] libmachine: Docker is up and running!
	I0914 23:09:24.847049   53243 main.go:141] libmachine: Reticulating splines...
	I0914 23:09:24.847057   53243 client.go:171] LocalClient.Create took 23.963989114s
	I0914 23:09:24.847090   53243 start.go:167] duration metric: libmachine.API.Create for "kindnet-104104" took 23.964060644s
	I0914 23:09:24.847104   53243 start.go:300] post-start starting for "kindnet-104104" (driver="kvm2")
	I0914 23:09:24.847115   53243 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:09:24.847139   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:24.847434   53243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:09:24.847488   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:24.850022   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.850421   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.850491   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.850543   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:24.850735   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:24.850900   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:24.851077   53243 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/kindnet-104104/id_rsa Username:docker}
	I0914 23:09:24.938677   53243 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:09:24.943120   53243 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 23:09:24.943143   53243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 23:09:24.943219   53243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 23:09:24.943293   53243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 23:09:24.943385   53243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:09:24.953638   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 23:09:24.977253   53243 start.go:303] post-start completed in 130.134766ms
	I0914 23:09:24.977303   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetConfigRaw
	I0914 23:09:24.977843   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetIP
	I0914 23:09:24.980271   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.980620   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.980658   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.980892   53243 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/config.json ...
	I0914 23:09:24.981100   53243 start.go:128] duration metric: createHost completed in 24.115788061s
	I0914 23:09:24.981121   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:24.983525   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.983841   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:24.983877   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:24.984012   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:24.984215   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:24.984379   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:24.984497   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:24.984686   53243 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:24.984989   53243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.231 22 <nil> <nil>}
	I0914 23:09:24.985001   53243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 23:09:25.095495   53243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694732965.068379220
	
	I0914 23:09:25.095518   53243 fix.go:206] guest clock: 1694732965.068379220
	I0914 23:09:25.095528   53243 fix.go:219] Guest: 2023-09-14 23:09:25.06837922 +0000 UTC Remote: 2023-09-14 23:09:24.981110706 +0000 UTC m=+24.228086774 (delta=87.268514ms)
	I0914 23:09:25.095569   53243 fix.go:190] guest clock delta is within tolerance: 87.268514ms
	I0914 23:09:25.095577   53243 start.go:83] releasing machines lock for "kindnet-104104", held for 24.23044913s
	I0914 23:09:25.095605   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:25.095893   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetIP
	I0914 23:09:25.098405   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:25.098774   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:25.098801   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:25.099002   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:25.099575   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:25.099746   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:25.099816   53243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:09:25.099863   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:25.099996   53243 ssh_runner.go:195] Run: cat /version.json
	I0914 23:09:25.100020   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:25.102691   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:25.102852   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:25.103029   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:25.103055   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:25.103199   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:25.103265   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:25.103288   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:25.103607   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:25.104852   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:25.104871   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:25.105217   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:25.105211   53243 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/kindnet-104104/id_rsa Username:docker}
	I0914 23:09:25.105400   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:25.105563   53243 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/kindnet-104104/id_rsa Username:docker}
	I0914 23:09:25.225330   53243 ssh_runner.go:195] Run: systemctl --version
	I0914 23:09:25.231147   53243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 23:09:25.386663   53243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 23:09:25.392888   53243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 23:09:25.392973   53243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:09:25.406322   53243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:09:25.406341   53243 start.go:469] detecting cgroup driver to use...
	I0914 23:09:25.406415   53243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:09:25.420147   53243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:09:25.434083   53243 docker.go:196] disabling cri-docker service (if available) ...
	I0914 23:09:25.434137   53243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 23:09:25.448487   53243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 23:09:25.461926   53243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 23:09:25.563557   53243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 23:09:25.691939   53243 docker.go:212] disabling docker service ...
	I0914 23:09:25.692001   53243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 23:09:25.706781   53243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 23:09:25.718458   53243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 23:09:21.456640   54941 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:09:21.456672   54941 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0914 23:09:21.456686   54941 cache.go:57] Caching tarball of preloaded images
	I0914 23:09:21.456790   54941 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 23:09:21.456807   54941 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 23:09:21.456897   54941 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/custom-flannel-104104/config.json ...
	I0914 23:09:21.456917   54941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/custom-flannel-104104/config.json: {Name:mk5125e52c02cafe8878ccd525ebeb62c7e2c693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:21.457065   54941 start.go:365] acquiring machines lock for custom-flannel-104104: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 23:09:25.844369   53243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 23:09:25.966746   53243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 23:09:25.980183   53243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:09:25.996809   53243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 23:09:25.996866   53243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:09:26.005585   53243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 23:09:26.005648   53243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:09:26.016892   53243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:09:26.025512   53243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:09:26.033939   53243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:09:26.043081   53243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:09:26.050933   53243 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 23:09:26.051001   53243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 23:09:26.063370   53243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:09:26.073277   53243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:09:26.194796   53243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 23:09:26.374098   53243 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 23:09:26.374148   53243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 23:09:26.380264   53243 start.go:537] Will wait 60s for crictl version
	I0914 23:09:26.380311   53243 ssh_runner.go:195] Run: which crictl
	I0914 23:09:26.383858   53243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:09:26.416523   53243 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 23:09:26.416606   53243 ssh_runner.go:195] Run: crio --version
	I0914 23:09:26.455907   53243 ssh_runner.go:195] Run: crio --version
	I0914 23:09:26.508069   53243 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 23:09:25.654489   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:25.654317   55002 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/calico-104104.rawdisk...
	I0914 23:09:25.654523   53573 main.go:141] libmachine: (calico-104104) DBG | Writing magic tar header
	I0914 23:09:25.654545   53573 main.go:141] libmachine: (calico-104104) DBG | Writing SSH key tar header
	I0914 23:09:25.654566   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:25.654448   55002 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104 ...
	I0914 23:09:25.654588   53573 main.go:141] libmachine: (calico-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104
	I0914 23:09:25.654649   53573 main.go:141] libmachine: (calico-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines
	I0914 23:09:25.654673   53573 main.go:141] libmachine: (calico-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 23:09:25.654687   53573 main.go:141] libmachine: (calico-104104) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104 (perms=drwx------)
	I0914 23:09:25.654709   53573 main.go:141] libmachine: (calico-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287
	I0914 23:09:25.654728   53573 main.go:141] libmachine: (calico-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 23:09:25.654741   53573 main.go:141] libmachine: (calico-104104) DBG | Checking permissions on dir: /home/jenkins
	I0914 23:09:25.654751   53573 main.go:141] libmachine: (calico-104104) DBG | Checking permissions on dir: /home
	I0914 23:09:25.654759   53573 main.go:141] libmachine: (calico-104104) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines (perms=drwxr-xr-x)
	I0914 23:09:25.654768   53573 main.go:141] libmachine: (calico-104104) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube (perms=drwxr-xr-x)
	I0914 23:09:25.654783   53573 main.go:141] libmachine: (calico-104104) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287 (perms=drwxrwxr-x)
	I0914 23:09:25.654802   53573 main.go:141] libmachine: (calico-104104) DBG | Skipping /home - not owner
	I0914 23:09:25.654818   53573 main.go:141] libmachine: (calico-104104) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 23:09:25.654835   53573 main.go:141] libmachine: (calico-104104) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 23:09:25.654847   53573 main.go:141] libmachine: (calico-104104) Creating domain...
	I0914 23:09:25.655930   53573 main.go:141] libmachine: (calico-104104) define libvirt domain using xml: 
	I0914 23:09:25.655953   53573 main.go:141] libmachine: (calico-104104) <domain type='kvm'>
	I0914 23:09:25.655966   53573 main.go:141] libmachine: (calico-104104)   <name>calico-104104</name>
	I0914 23:09:25.655981   53573 main.go:141] libmachine: (calico-104104)   <memory unit='MiB'>3072</memory>
	I0914 23:09:25.655995   53573 main.go:141] libmachine: (calico-104104)   <vcpu>2</vcpu>
	I0914 23:09:25.656008   53573 main.go:141] libmachine: (calico-104104)   <features>
	I0914 23:09:25.656021   53573 main.go:141] libmachine: (calico-104104)     <acpi/>
	I0914 23:09:25.656038   53573 main.go:141] libmachine: (calico-104104)     <apic/>
	I0914 23:09:25.656052   53573 main.go:141] libmachine: (calico-104104)     <pae/>
	I0914 23:09:25.656063   53573 main.go:141] libmachine: (calico-104104)     
	I0914 23:09:25.656076   53573 main.go:141] libmachine: (calico-104104)   </features>
	I0914 23:09:25.656088   53573 main.go:141] libmachine: (calico-104104)   <cpu mode='host-passthrough'>
	I0914 23:09:25.656114   53573 main.go:141] libmachine: (calico-104104)   
	I0914 23:09:25.656131   53573 main.go:141] libmachine: (calico-104104)   </cpu>
	I0914 23:09:25.656142   53573 main.go:141] libmachine: (calico-104104)   <os>
	I0914 23:09:25.656161   53573 main.go:141] libmachine: (calico-104104)     <type>hvm</type>
	I0914 23:09:25.656175   53573 main.go:141] libmachine: (calico-104104)     <boot dev='cdrom'/>
	I0914 23:09:25.656187   53573 main.go:141] libmachine: (calico-104104)     <boot dev='hd'/>
	I0914 23:09:25.656209   53573 main.go:141] libmachine: (calico-104104)     <bootmenu enable='no'/>
	I0914 23:09:25.656227   53573 main.go:141] libmachine: (calico-104104)   </os>
	I0914 23:09:25.656267   53573 main.go:141] libmachine: (calico-104104)   <devices>
	I0914 23:09:25.656291   53573 main.go:141] libmachine: (calico-104104)     <disk type='file' device='cdrom'>
	I0914 23:09:25.656317   53573 main.go:141] libmachine: (calico-104104)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/boot2docker.iso'/>
	I0914 23:09:25.656332   53573 main.go:141] libmachine: (calico-104104)       <target dev='hdc' bus='scsi'/>
	I0914 23:09:25.656346   53573 main.go:141] libmachine: (calico-104104)       <readonly/>
	I0914 23:09:25.656359   53573 main.go:141] libmachine: (calico-104104)     </disk>
	I0914 23:09:25.656375   53573 main.go:141] libmachine: (calico-104104)     <disk type='file' device='disk'>
	I0914 23:09:25.656394   53573 main.go:141] libmachine: (calico-104104)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 23:09:25.656413   53573 main.go:141] libmachine: (calico-104104)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/calico-104104.rawdisk'/>
	I0914 23:09:25.656427   53573 main.go:141] libmachine: (calico-104104)       <target dev='hda' bus='virtio'/>
	I0914 23:09:25.656441   53573 main.go:141] libmachine: (calico-104104)     </disk>
	I0914 23:09:25.656454   53573 main.go:141] libmachine: (calico-104104)     <interface type='network'>
	I0914 23:09:25.656468   53573 main.go:141] libmachine: (calico-104104)       <source network='mk-calico-104104'/>
	I0914 23:09:25.656484   53573 main.go:141] libmachine: (calico-104104)       <model type='virtio'/>
	I0914 23:09:25.656497   53573 main.go:141] libmachine: (calico-104104)     </interface>
	I0914 23:09:25.656512   53573 main.go:141] libmachine: (calico-104104)     <interface type='network'>
	I0914 23:09:25.656525   53573 main.go:141] libmachine: (calico-104104)       <source network='default'/>
	I0914 23:09:25.656539   53573 main.go:141] libmachine: (calico-104104)       <model type='virtio'/>
	I0914 23:09:25.656553   53573 main.go:141] libmachine: (calico-104104)     </interface>
	I0914 23:09:25.656566   53573 main.go:141] libmachine: (calico-104104)     <serial type='pty'>
	I0914 23:09:25.656587   53573 main.go:141] libmachine: (calico-104104)       <target port='0'/>
	I0914 23:09:25.656600   53573 main.go:141] libmachine: (calico-104104)     </serial>
	I0914 23:09:25.656614   53573 main.go:141] libmachine: (calico-104104)     <console type='pty'>
	I0914 23:09:25.656632   53573 main.go:141] libmachine: (calico-104104)       <target type='serial' port='0'/>
	I0914 23:09:25.656646   53573 main.go:141] libmachine: (calico-104104)     </console>
	I0914 23:09:25.656659   53573 main.go:141] libmachine: (calico-104104)     <rng model='virtio'>
	I0914 23:09:25.656673   53573 main.go:141] libmachine: (calico-104104)       <backend model='random'>/dev/random</backend>
	I0914 23:09:25.656686   53573 main.go:141] libmachine: (calico-104104)     </rng>
	I0914 23:09:25.656698   53573 main.go:141] libmachine: (calico-104104)     
	I0914 23:09:25.656712   53573 main.go:141] libmachine: (calico-104104)     
	I0914 23:09:25.656726   53573 main.go:141] libmachine: (calico-104104)   </devices>
	I0914 23:09:25.656738   53573 main.go:141] libmachine: (calico-104104) </domain>
	I0914 23:09:25.656767   53573 main.go:141] libmachine: (calico-104104) 
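
The block above is the literal libvirt domain XML that the KVM driver defines for the new machine: name, memory, vCPUs, the boot2docker ISO as a CD-ROM, the raw disk, and two virtio NICs (the private mk-calico-104104 network plus the default network). A minimal, self-contained Go sketch of how such an XML document could be rendered from a small config struct with text/template is shown below; the DomainSpec struct and template are illustrative assumptions, not minikube's actual driver code.

    // render_domain.go - illustrative sketch: render a minimal libvirt <domain> XML
    // from a small config struct, in the spirit of the definition logged above.
    // The field names and template are assumptions, not minikube's real template.
    package main

    import (
    	"os"
    	"text/template"
    )

    type DomainSpec struct {
    	Name      string
    	MemoryMiB int
    	VCPUs     int
    	ISOPath   string
    	DiskPath  string
    	Network   string
    }

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
        <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
        <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
      </devices>
    </domain>
    `

    func main() {
    	spec := DomainSpec{
    		Name:      "calico-104104",
    		MemoryMiB: 3072,
    		VCPUs:     2,
    		ISOPath:   "/path/to/boot2docker.iso",   // placeholder path
    		DiskPath:  "/path/to/calico-104104.rawdisk", // placeholder path
    		Network:   "mk-calico-104104",
    	}
    	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
    	if err := tmpl.Execute(os.Stdout, spec); err != nil {
    		panic(err)
    	}
    }
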
	I0914 23:09:25.660462   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:47:f7:ea in network default
	I0914 23:09:25.661029   53573 main.go:141] libmachine: (calico-104104) Ensuring networks are active...
	I0914 23:09:25.661052   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:25.661726   53573 main.go:141] libmachine: (calico-104104) Ensuring network default is active
	I0914 23:09:25.662062   53573 main.go:141] libmachine: (calico-104104) Ensuring network mk-calico-104104 is active
	I0914 23:09:25.662640   53573 main.go:141] libmachine: (calico-104104) Getting domain xml...
	I0914 23:09:25.663427   53573 main.go:141] libmachine: (calico-104104) Creating domain...
	I0914 23:09:27.082680   53573 main.go:141] libmachine: (calico-104104) Waiting to get IP...
	I0914 23:09:27.083715   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:27.084213   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:27.084264   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:27.084199   55002 retry.go:31] will retry after 268.562299ms: waiting for machine to come up
	I0914 23:09:27.354935   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:27.355535   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:27.355566   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:27.355452   55002 retry.go:31] will retry after 360.070276ms: waiting for machine to come up
	I0914 23:09:27.716993   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:27.717488   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:27.717512   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:27.717455   55002 retry.go:31] will retry after 469.097792ms: waiting for machine to come up
	I0914 23:09:28.188230   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:28.188849   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:28.188876   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:28.188770   55002 retry.go:31] will retry after 393.828094ms: waiting for machine to come up
	I0914 23:09:28.584311   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:28.584905   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:28.584945   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:28.584852   55002 retry.go:31] will retry after 645.250817ms: waiting for machine to come up
	I0914 23:09:29.231733   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:29.232160   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:29.232198   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:29.232122   55002 retry.go:31] will retry after 835.483257ms: waiting for machine to come up
	I0914 23:09:30.069727   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:30.070214   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:30.070247   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:30.070172   55002 retry.go:31] will retry after 780.501466ms: waiting for machine to come up
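
The repeating "unable to find current IP address ... will retry after ..." lines above are the driver polling the libvirt DHCP leases for the domain's MAC address until one appears, with delays that grow between attempts. The sketch below shows the same retry-until-deadline shape in plain Go; waitForIP and the fake lookup are illustrative assumptions, not minikube's retry.go code.

    // retry_ip.go - illustrative sketch of the "will retry after ..." pattern above:
    // poll for a result with growing, jittered delays until a deadline passes.
    // lookupIP below is a stand-in for "find the DHCP lease for this MAC".
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP keeps calling lookup until it returns an address or the timeout expires.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		ip, err := lookup()
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		// Jitter the delay and grow it, mirroring the increasing waits in the log.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	calls := 0
    	// Fake lookup: the "lease" appears on the fourth call.
    	lookup := func() (string, error) {
    		calls++
    		if calls < 4 {
    			return "", errors.New("unable to find current IP address")
    		}
    		return "192.168.39.36", nil
    	}
    	ip, err := waitForIP(lookup, 30*time.Second)
    	fmt.Println(ip, err)
    }
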
	I0914 23:09:26.509531   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetIP
	I0914 23:09:26.512620   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:26.513078   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:26.513122   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:26.513316   53243 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 23:09:26.517702   53243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
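
The two lines above first check whether /etc/hosts already resolves host.minikube.internal and, if not, run a bash one-liner that strips any stale entry, appends the gateway IP, and sudo-copies the result back over /etc/hosts. A small Go sketch of composing that one-liner for an arbitrary IP and name follows; it only prints the command, and the helper name is an assumption rather than minikube's code.

    // hosts_entry.go - illustrative sketch: compose the bash one-liner used above to
    // pin "host.minikube.internal" (or any name) to an IP in /etc/hosts.
    // The command is only printed here, not executed.
    package main

    import "fmt"

    // hostsUpdateCmd rewrites /etc/hosts without the old entry for name, appends
    // "ip<TAB>name", writes to a temp file, and sudo-copies it back into place.
    func hostsUpdateCmd(ip, name string) string {
    	entry := ip + "\t" + name // real tab between IP and hostname, as in the log
    	return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
    		name, entry)
    }

    func main() {
    	fmt.Println(hostsUpdateCmd("192.168.72.1", "host.minikube.internal"))
    }
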
	I0914 23:09:26.530440   53243 localpath.go:92] copying /home/jenkins/minikube-integration/17243-6287/.minikube/client.crt -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/client.crt
	I0914 23:09:26.530561   53243 localpath.go:117] copying /home/jenkins/minikube-integration/17243-6287/.minikube/client.key -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/client.key
	I0914 23:09:26.530655   53243 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:09:26.530693   53243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 23:09:26.564089   53243 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 23:09:26.564149   53243 ssh_runner.go:195] Run: which lz4
	I0914 23:09:26.568019   53243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 23:09:26.572483   53243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 23:09:26.572521   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 23:09:28.372023   53243 crio.go:444] Took 1.804033 seconds to copy over tarball
	I0914 23:09:28.372106   53243 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 23:09:30.852676   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:30.853122   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:30.853149   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:30.853106   55002 retry.go:31] will retry after 1.133645102s: waiting for machine to come up
	I0914 23:09:31.988524   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:31.989049   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:31.989091   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:31.988992   55002 retry.go:31] will retry after 1.713109838s: waiting for machine to come up
	I0914 23:09:33.704764   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:33.705228   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:33.705264   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:33.705187   55002 retry.go:31] will retry after 2.127678872s: waiting for machine to come up
	I0914 23:09:31.198272   53243 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.826119965s)
	I0914 23:09:31.198299   53243 crio.go:451] Took 2.826249 seconds to extract the tarball
	I0914 23:09:31.198311   53243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 23:09:31.238126   53243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 23:09:31.284952   53243 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 23:09:31.284980   53243 cache_images.go:84] Images are preloaded, skipping loading
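
The preload step above copies an lz4-compressed image tarball (~457 MB) onto the node, unpacks it into /var so CRI-O starts with the Kubernetes images already present, deletes the tarball, and re-runs crictl to confirm the images are there. The sketch below shows the extract-and-time part by shelling out to tar, assuming lz4 is on PATH; the paths are placeholders and this is not the ssh_runner code itself.

    // preload_extract.go - illustrative sketch of the preload step above: extract a
    // lz4-compressed image tarball into a target directory and report how long it took.
    // Assumes `tar` and an lz4 filter are installed; paths here are placeholders.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func extractPreload(tarball, destDir string) error {
    	start := time.Now()
    	// Same invocation as in the log: tar -I lz4 -C <dest> -xf <tarball>
    	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract failed: %v: %s", err, out)
    	}
    	fmt.Printf("Took %.3f seconds to extract the tarball\n", time.Since(start).Seconds())
    	return nil
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Println(err)
    	}
    }
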
	I0914 23:09:31.285058   53243 ssh_runner.go:195] Run: crio config
	I0914 23:09:31.342569   53243 cni.go:84] Creating CNI manager for "kindnet"
	I0914 23:09:31.342599   53243 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 23:09:31.342616   53243 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.231 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-104104 NodeName:kindnet-104104 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 23:09:31.342776   53243 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-104104"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
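The generated kubeadm config above is a single multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---". One quick way to sanity-check such a file is to decode each document in turn and print its kind, as in the sketch below; it uses gopkg.in/yaml.v3, which is assumed to be available, and is not part of minikube itself.

    // check_kubeadm_yaml.go - illustrative sketch: walk the multi-document kubeadm
    // config (documents separated by "---", as generated above) and print each
    // document's kind and apiVersion.
    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log; adjust as needed
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }
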
	I0914 23:09:31.342841   53243 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kindnet-104104 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:kindnet-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0914 23:09:31.342896   53243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 23:09:31.352247   53243 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 23:09:31.352301   53243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 23:09:31.361696   53243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0914 23:09:31.379541   53243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 23:09:31.397152   53243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
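
The kubelet drop-in, the service unit, and kubeadm.yaml.new above are all generated in memory and then scp'd onto the node (the 374 bytes logged for 10-kubeadm.conf correspond to content like the [Unit]/[Service] block shown earlier). A minimal local sketch of writing such a drop-in and reporting its size follows; the output path is a placeholder and the flag values simply repeat what the log shows for kindnet-104104.

    // write_dropin.go - illustrative sketch: write a kubelet systemd drop-in like the
    // one transferred above and print its size in bytes.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	dropin := `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kindnet-104104 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.231

    [Install]
    `
    	path := "10-kubeadm.conf" // would land in /etc/systemd/system/kubelet.service.d/ on the node
    	if err := os.WriteFile(path, []byte(dropin), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Printf("scp memory --> %s (%d bytes)\n", path, len(dropin))
    }
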
	I0914 23:09:31.413975   53243 ssh_runner.go:195] Run: grep 192.168.72.231	control-plane.minikube.internal$ /etc/hosts
	I0914 23:09:31.417328   53243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 23:09:31.428175   53243 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104 for IP: 192.168.72.231
	I0914 23:09:31.428215   53243 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:31.428408   53243 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 23:09:31.428466   53243 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 23:09:31.428572   53243 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/client.key
	I0914 23:09:31.428609   53243 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.key.da7f9b78
	I0914 23:09:31.428642   53243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.crt.da7f9b78 with IP's: [192.168.72.231 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 23:09:31.555330   53243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.crt.da7f9b78 ...
	I0914 23:09:31.555361   53243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.crt.da7f9b78: {Name:mkfd9f493b2d66b665abcf4d5d88bc0c7b3f1ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:31.555549   53243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.key.da7f9b78 ...
	I0914 23:09:31.555563   53243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.key.da7f9b78: {Name:mk2b055320197525728bfa15e8e50f7a44c5baa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:31.555651   53243 certs.go:337] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.crt.da7f9b78 -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.crt
	I0914 23:09:31.555716   53243 certs.go:341] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.key.da7f9b78 -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.key
	I0914 23:09:31.555773   53243 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/proxy-client.key
	I0914 23:09:31.555788   53243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/proxy-client.crt with IP's: []
	I0914 23:09:31.823405   53243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/proxy-client.crt ...
	I0914 23:09:31.823436   53243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/proxy-client.crt: {Name:mk2a61895e222aea1dfa075842d9c4abf9f4ab53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:31.823636   53243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/proxy-client.key ...
	I0914 23:09:31.823649   53243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/proxy-client.key: {Name:mk5653933e2f7481bc624b6734e43932dcf990da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:31.823827   53243 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 23:09:31.823867   53243 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 23:09:31.823878   53243 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 23:09:31.823905   53243 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 23:09:31.823936   53243 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 23:09:31.823961   53243 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 23:09:31.824004   53243 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 23:09:31.824522   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 23:09:31.852569   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 23:09:31.875891   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 23:09:31.898155   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/kindnet-104104/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 23:09:31.919781   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 23:09:31.942316   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 23:09:31.964219   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 23:09:31.986439   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 23:09:32.009004   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 23:09:32.033611   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 23:09:32.057396   53243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 23:09:32.078688   53243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 23:09:32.096270   53243 ssh_runner.go:195] Run: openssl version
	I0914 23:09:32.101918   53243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 23:09:32.113975   53243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 23:09:32.118569   53243 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 23:09:32.118628   53243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 23:09:32.124146   53243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 23:09:32.133782   53243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 23:09:32.143796   53243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:09:32.148369   53243 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:09:32.148428   53243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:09:32.153584   53243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 23:09:32.162872   53243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 23:09:32.172120   53243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 23:09:32.176354   53243 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 23:09:32.176404   53243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 23:09:32.182583   53243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
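
The sequence above installs each CA certificate into the node's trust store: the PEM is linked under /usr/share/ca-certificates, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is created in /etc/ssl/certs so OpenSSL can find the CA by hash. The sketch below does the same two steps locally; it assumes the openssl CLI is installed and uses placeholder paths, and is not minikube's certs.go code.

    // cert_hash_link.go - illustrative sketch of the trust-store step above: compute a
    // certificate's OpenSSL subject hash and symlink <hash>.0 to it under a certs dir.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(certPath, certsDir string) (string, error) {
    	// Same command as in the log: openssl x509 -hash -noout -in <cert>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", fmt.Errorf("openssl: %w", err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace an existing link, like `ln -fs`
    	if err := os.Symlink(certPath, link); err != nil {
    		return "", err
    	}
    	return link, nil
    }

    func main() {
    	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	fmt.Println(link, err)
    }
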
	I0914 23:09:32.192369   53243 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 23:09:32.196242   53243 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 23:09:32.196297   53243 kubeadm.go:404] StartCluster: {Name:kindnet-104104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:09:32.196400   53243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 23:09:32.196439   53243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 23:09:32.226825   53243 cri.go:89] found id: ""
	I0914 23:09:32.226894   53243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 23:09:32.236286   53243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:09:32.245218   53243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:09:32.254261   53243 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:09:32.254305   53243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 23:09:32.432929   53243 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 23:09:35.834776   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:35.835153   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:35.835178   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:35.835118   55002 retry.go:31] will retry after 1.773868537s: waiting for machine to come up
	I0914 23:09:37.610477   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:37.610954   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:37.610987   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:37.610916   55002 retry.go:31] will retry after 3.002688914s: waiting for machine to come up
	I0914 23:09:44.643627   53243 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 23:09:44.643709   53243 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 23:09:44.643806   53243 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 23:09:44.643937   53243 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 23:09:44.644041   53243 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 23:09:44.644100   53243 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 23:09:44.645627   53243 out.go:204]   - Generating certificates and keys ...
	I0914 23:09:44.645711   53243 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 23:09:44.645790   53243 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 23:09:44.645871   53243 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 23:09:44.645935   53243 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 23:09:44.646025   53243 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 23:09:44.646087   53243 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 23:09:44.646170   53243 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 23:09:44.646359   53243 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-104104 localhost] and IPs [192.168.72.231 127.0.0.1 ::1]
	I0914 23:09:44.646436   53243 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 23:09:44.646601   53243 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-104104 localhost] and IPs [192.168.72.231 127.0.0.1 ::1]
	I0914 23:09:44.646707   53243 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 23:09:44.646790   53243 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 23:09:44.646870   53243 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 23:09:44.646933   53243 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 23:09:44.646980   53243 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 23:09:44.647042   53243 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 23:09:44.647140   53243 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 23:09:44.647227   53243 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 23:09:44.647328   53243 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 23:09:44.647418   53243 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 23:09:44.649012   53243 out.go:204]   - Booting up control plane ...
	I0914 23:09:44.649120   53243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 23:09:44.649227   53243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 23:09:44.649284   53243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 23:09:44.649399   53243 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 23:09:44.649478   53243 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 23:09:44.649512   53243 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 23:09:44.649682   53243 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 23:09:44.649786   53243 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002896 seconds
	I0914 23:09:44.649922   53243 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 23:09:44.650092   53243 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 23:09:44.650180   53243 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 23:09:44.650473   53243 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-104104 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 23:09:44.650543   53243 kubeadm.go:322] [bootstrap-token] Using token: otslks.boantv1jnmb2xlwj
	I0914 23:09:44.651797   53243 out.go:204]   - Configuring RBAC rules ...
	I0914 23:09:44.651881   53243 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 23:09:44.651948   53243 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 23:09:44.652076   53243 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 23:09:44.652178   53243 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 23:09:44.652270   53243 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 23:09:44.652403   53243 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 23:09:44.652566   53243 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 23:09:44.652619   53243 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 23:09:44.652678   53243 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 23:09:44.652690   53243 kubeadm.go:322] 
	I0914 23:09:44.652766   53243 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 23:09:44.652775   53243 kubeadm.go:322] 
	I0914 23:09:44.652862   53243 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 23:09:44.652873   53243 kubeadm.go:322] 
	I0914 23:09:44.652913   53243 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 23:09:44.653004   53243 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 23:09:44.653078   53243 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 23:09:44.653087   53243 kubeadm.go:322] 
	I0914 23:09:44.653165   53243 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 23:09:44.653176   53243 kubeadm.go:322] 
	I0914 23:09:44.653261   53243 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 23:09:44.653272   53243 kubeadm.go:322] 
	I0914 23:09:44.653313   53243 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 23:09:44.653422   53243 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 23:09:44.653514   53243 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 23:09:44.653521   53243 kubeadm.go:322] 
	I0914 23:09:44.653586   53243 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 23:09:44.653650   53243 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 23:09:44.653659   53243 kubeadm.go:322] 
	I0914 23:09:44.653740   53243 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token otslks.boantv1jnmb2xlwj \
	I0914 23:09:44.653894   53243 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 23:09:44.653927   53243 kubeadm.go:322] 	--control-plane 
	I0914 23:09:44.653934   53243 kubeadm.go:322] 
	I0914 23:09:44.654065   53243 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 23:09:44.654075   53243 kubeadm.go:322] 
	I0914 23:09:44.654140   53243 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token otslks.boantv1jnmb2xlwj \
	I0914 23:09:44.654234   53243 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
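
The block above is kubeadm's own init output; the join command at the end carries the bootstrap token and the discovery CA-certificate hash that worker or control-plane nodes would use to join. The sketch below pulls those two values out of such output with regular expressions; the patterns are assumptions for illustration, not minikube's actual parsing code.

    // parse_join.go - illustrative sketch: pull the bootstrap token and discovery hash
    // out of "kubeadm init" output like the block above.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var (
    	tokenRe = regexp.MustCompile(`--token\s+(\S+)`)
    	hashRe  = regexp.MustCompile(`--discovery-token-ca-cert-hash\s+(sha256:[0-9a-f]+)`)
    )

    func main() {
    	out := `kubeadm join control-plane.minikube.internal:8443 --token otslks.boantv1jnmb2xlwj \
    	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27`

    	if m := tokenRe.FindStringSubmatch(out); m != nil {
    		fmt.Println("token:", m[1])
    	}
    	if m := hashRe.FindStringSubmatch(out); m != nil {
    		fmt.Println("ca cert hash:", m[1])
    	}
    }
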
	I0914 23:09:44.654245   53243 cni.go:84] Creating CNI manager for "kindnet"
	I0914 23:09:44.655782   53243 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 23:09:40.615596   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:40.616081   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:40.616114   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:40.616034   55002 retry.go:31] will retry after 3.948194584s: waiting for machine to come up
	I0914 23:09:44.568402   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:44.568728   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find current IP address of domain calico-104104 in network mk-calico-104104
	I0914 23:09:44.568757   53573 main.go:141] libmachine: (calico-104104) DBG | I0914 23:09:44.568676   55002 retry.go:31] will retry after 4.295532871s: waiting for machine to come up
	I0914 23:09:44.657005   53243 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 23:09:44.662980   53243 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 23:09:44.662994   53243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 23:09:44.699258   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 23:09:45.550015   53243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:09:45.550135   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=kindnet-104104 minikube.k8s.io/updated_at=2023_09_14T23_09_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:45.550134   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:45.644219   53243 ops.go:34] apiserver oom_adj: -16
	I0914 23:09:45.644543   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:45.771913   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
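
The near-identical timestamps above show the post-init kubectl commands (labelling the node, creating the minikube-rbac clusterrolebinding, checking the default service account) being launched back to back rather than strictly one after another. The sketch below runs a simplified set of such commands concurrently and collects their results; the kubectl path, kubeconfig, and label set are placeholders, and this is not minikube's own code.

    // postinit_kubectl.go - illustrative sketch of the post-init step above: run the
    // node-labelling and RBAC kubectl commands concurrently and report their errors.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"sync"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.28.1/kubectl"
    	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

    	cmds := [][]string{
    		{kubectl, "label", "nodes", "minikube.k8s.io/name=kindnet-104104", "--all", "--overwrite", kubeconfig},
    		{kubectl, "create", "clusterrolebinding", "minikube-rbac",
    			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig},
    		{kubectl, "get", "sa", "default", kubeconfig},
    	}

    	var wg sync.WaitGroup
    	for _, args := range cmds {
    		args := args
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			out, err := exec.Command("sudo", args...).CombinedOutput()
    			fmt.Printf("%v -> err=%v out=%q\n", args[1], err, out)
    		}()
    	}
    	wg.Wait()
    }
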
	I0914 23:09:48.867124   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:48.867595   53573 main.go:141] libmachine: (calico-104104) Found IP for machine: 192.168.39.36
	I0914 23:09:48.867629   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has current primary IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:48.867639   53573 main.go:141] libmachine: (calico-104104) Reserving static IP address...
	I0914 23:09:48.867995   53573 main.go:141] libmachine: (calico-104104) DBG | unable to find host DHCP lease matching {name: "calico-104104", mac: "52:54:00:60:77:d9", ip: "192.168.39.36"} in network mk-calico-104104
	I0914 23:09:48.950108   53573 main.go:141] libmachine: (calico-104104) DBG | Getting to WaitForSSH function...
	I0914 23:09:48.950136   53573 main.go:141] libmachine: (calico-104104) Reserved static IP address: 192.168.39.36
	I0914 23:09:48.950151   53573 main.go:141] libmachine: (calico-104104) Waiting for SSH to be available...
	I0914 23:09:48.953262   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:48.953652   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:48.953701   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:48.953785   53573 main.go:141] libmachine: (calico-104104) DBG | Using SSH client type: external
	I0914 23:09:48.953815   53573 main.go:141] libmachine: (calico-104104) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/id_rsa (-rw-------)
	I0914 23:09:48.953845   53573 main.go:141] libmachine: (calico-104104) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 23:09:48.953868   53573 main.go:141] libmachine: (calico-104104) DBG | About to run SSH command:
	I0914 23:09:48.953900   53573 main.go:141] libmachine: (calico-104104) DBG | exit 0
	I0914 23:09:49.047428   53573 main.go:141] libmachine: (calico-104104) DBG | SSH cmd err, output: <nil>: 
	I0914 23:09:49.047705   53573 main.go:141] libmachine: (calico-104104) KVM machine creation complete!
	I0914 23:09:49.048021   53573 main.go:141] libmachine: (calico-104104) Calling .GetConfigRaw
	I0914 23:09:49.048565   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:09:49.048769   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:09:49.048921   53573 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 23:09:49.048936   53573 main.go:141] libmachine: (calico-104104) Calling .GetState
	I0914 23:09:49.050405   53573 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 23:09:49.050419   53573 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 23:09:49.050425   53573 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 23:09:49.050431   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:49.052903   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.053473   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:49.053531   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.053593   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:49.054465   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.054913   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.055113   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:49.055260   53573 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:49.055613   53573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0914 23:09:49.055628   53573 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 23:09:49.174857   53573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:09:49.174885   53573 main.go:141] libmachine: Detecting the provisioner...
	I0914 23:09:49.174897   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:49.178129   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.178376   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:49.178415   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.178567   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:49.178810   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.178970   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.179155   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:49.179317   53573 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:49.179689   53573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0914 23:09:49.179708   53573 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 23:09:49.304153   53573 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g52d8811-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0914 23:09:49.304223   53573 main.go:141] libmachine: found compatible host: buildroot
	I0914 23:09:49.304239   53573 main.go:141] libmachine: Provisioning with buildroot...
	I0914 23:09:49.304266   53573 main.go:141] libmachine: (calico-104104) Calling .GetMachineName
	I0914 23:09:49.304542   53573 buildroot.go:166] provisioning hostname "calico-104104"
	I0914 23:09:49.304565   53573 main.go:141] libmachine: (calico-104104) Calling .GetMachineName
	I0914 23:09:49.304745   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:49.307451   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.307916   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:49.307950   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.308044   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:49.308223   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.308394   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.308500   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:49.308637   53573 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:49.309003   53573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0914 23:09:49.309018   53573 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-104104 && echo "calico-104104" | sudo tee /etc/hostname
	I0914 23:09:49.449418   53573 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-104104
	
	I0914 23:09:49.449456   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:49.452618   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.452970   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:49.453007   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.453226   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:49.453451   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.453593   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.453756   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:49.453948   53573 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:49.454324   53573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0914 23:09:49.454342   53573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-104104' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-104104/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-104104' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:09:49.584560   53573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:09:49.584587   53573 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 23:09:49.584629   53573 buildroot.go:174] setting up certificates
	I0914 23:09:49.584640   53573 provision.go:83] configureAuth start
	I0914 23:09:49.584654   53573 main.go:141] libmachine: (calico-104104) Calling .GetMachineName
	I0914 23:09:49.584956   53573 main.go:141] libmachine: (calico-104104) Calling .GetIP
	I0914 23:09:49.587321   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.587675   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:49.587731   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.587836   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:49.590182   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.590501   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:49.590521   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.590639   53573 provision.go:138] copyHostCerts
	I0914 23:09:49.590685   53573 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 23:09:49.590695   53573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 23:09:49.590760   53573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 23:09:49.590875   53573 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 23:09:49.590886   53573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 23:09:49.590921   53573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 23:09:49.591014   53573 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 23:09:49.591024   53573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 23:09:49.591053   53573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 23:09:49.591114   53573 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.calico-104104 san=[192.168.39.36 192.168.39.36 localhost 127.0.0.1 minikube calico-104104]
	I0914 23:09:49.793228   53573 provision.go:172] copyRemoteCerts
	I0914 23:09:49.793279   53573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:09:49.793300   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:49.796027   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.796249   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:49.796281   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.796456   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:49.796657   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.796819   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:49.796964   53573 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/id_rsa Username:docker}
	I0914 23:09:49.888186   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 23:09:49.912528   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:09:49.937967   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:09:49.961942   53573 provision.go:86] duration metric: configureAuth took 377.284235ms
	I0914 23:09:49.961968   53573 buildroot.go:189] setting minikube options for container-runtime
	I0914 23:09:49.962148   53573 config.go:182] Loaded profile config "calico-104104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:49.962236   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:49.965778   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.966163   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:49.966204   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:49.966417   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:49.966608   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.966759   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:49.966906   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:49.967056   53573 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:49.967399   53573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0914 23:09:49.967417   53573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:09:50.262871   53573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:09:50.262903   53573 main.go:141] libmachine: Checking connection to Docker...
	I0914 23:09:50.262915   53573 main.go:141] libmachine: (calico-104104) Calling .GetURL
	I0914 23:09:50.264323   53573 main.go:141] libmachine: (calico-104104) DBG | Using libvirt version 6000000
	I0914 23:09:50.266608   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.267017   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:50.267048   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.267254   53573 main.go:141] libmachine: Docker is up and running!
	I0914 23:09:50.267271   53573 main.go:141] libmachine: Reticulating splines...
	I0914 23:09:50.267279   53573 client.go:171] LocalClient.Create took 25.14687917s
	I0914 23:09:50.267308   53573 start.go:167] duration metric: libmachine.API.Create for "calico-104104" took 25.146966904s
	I0914 23:09:50.267321   53573 start.go:300] post-start starting for "calico-104104" (driver="kvm2")
	I0914 23:09:50.267333   53573 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:09:50.267379   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:09:50.267637   53573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:09:50.267669   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:50.269935   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.270313   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:50.270348   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.270471   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:50.270684   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:50.270879   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:50.271009   53573 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/id_rsa Username:docker}
	I0914 23:09:50.362089   53573 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:09:50.366280   53573 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 23:09:50.366311   53573 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 23:09:50.366380   53573 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 23:09:50.366470   53573 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 23:09:50.366585   53573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:09:50.375458   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 23:09:50.396725   53573 start.go:303] post-start completed in 129.388453ms
	I0914 23:09:50.396777   53573 main.go:141] libmachine: (calico-104104) Calling .GetConfigRaw
	I0914 23:09:50.397348   53573 main.go:141] libmachine: (calico-104104) Calling .GetIP
	I0914 23:09:50.400372   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.400718   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:50.400749   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.400992   53573 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/config.json ...
	I0914 23:09:50.401217   53573 start.go:128] duration metric: createHost completed in 25.305330351s
	I0914 23:09:50.401249   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:50.403753   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.404118   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:50.404144   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.404272   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:50.404439   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:50.404608   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:50.404740   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:50.404900   53573 main.go:141] libmachine: Using SSH client type: native
	I0914 23:09:50.405194   53573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0914 23:09:50.405211   53573 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 23:09:50.528119   54941 start.go:369] acquired machines lock for "custom-flannel-104104" in 29.071011215s
	I0914 23:09:50.528181   54941 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-104104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 23:09:50.528343   54941 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 23:09:46.377182   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:46.876405   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:47.376588   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:47.876286   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:48.376724   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:48.877235   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:49.376723   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:49.876521   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:50.377021   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:50.530001   54941 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 23:09:50.530609   54941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:09:50.530653   54941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:09:50.548850   54941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0914 23:09:50.549237   54941 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:09:50.549869   54941 main.go:141] libmachine: Using API Version  1
	I0914 23:09:50.549892   54941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:09:50.550243   54941 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:09:50.550468   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetMachineName
	I0914 23:09:50.550635   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .DriverName
	I0914 23:09:50.550797   54941 start.go:159] libmachine.API.Create for "custom-flannel-104104" (driver="kvm2")
	I0914 23:09:50.550833   54941 client.go:168] LocalClient.Create starting
	I0914 23:09:50.550862   54941 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem
	I0914 23:09:50.550889   54941 main.go:141] libmachine: Decoding PEM data...
	I0914 23:09:50.550902   54941 main.go:141] libmachine: Parsing certificate...
	I0914 23:09:50.550968   54941 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem
	I0914 23:09:50.550993   54941 main.go:141] libmachine: Decoding PEM data...
	I0914 23:09:50.551008   54941 main.go:141] libmachine: Parsing certificate...
	I0914 23:09:50.551046   54941 main.go:141] libmachine: Running pre-create checks...
	I0914 23:09:50.551058   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .PreCreateCheck
	I0914 23:09:50.551377   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetConfigRaw
	I0914 23:09:50.551768   54941 main.go:141] libmachine: Creating machine...
	I0914 23:09:50.551782   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .Create
	I0914 23:09:50.551900   54941 main.go:141] libmachine: (custom-flannel-104104) Creating KVM machine...
	I0914 23:09:50.553047   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found existing default KVM network
	I0914 23:09:50.554432   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:50.554277   55233 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a1:f2:11} reservation:<nil>}
	I0914 23:09:50.555841   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:50.555741   55233 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000280a10}
	I0914 23:09:50.561273   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | trying to create private KVM network mk-custom-flannel-104104 192.168.50.0/24...
	I0914 23:09:50.638396   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | private KVM network mk-custom-flannel-104104 192.168.50.0/24 created
	I0914 23:09:50.638426   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:50.638363   55233 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 23:09:50.638444   54941 main.go:141] libmachine: (custom-flannel-104104) Setting up store path in /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104 ...
	I0914 23:09:50.638471   54941 main.go:141] libmachine: (custom-flannel-104104) Building disk image from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso
	I0914 23:09:50.638487   54941 main.go:141] libmachine: (custom-flannel-104104) Downloading /home/jenkins/minikube-integration/17243-6287/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso...
	I0914 23:09:50.863169   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:50.863037   55233 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa...
	I0914 23:09:51.078783   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:51.078640   55233 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/custom-flannel-104104.rawdisk...
	I0914 23:09:51.078828   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Writing magic tar header
	I0914 23:09:51.078877   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Writing SSH key tar header
	I0914 23:09:51.078929   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:51.078782   55233 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104 ...
	I0914 23:09:51.078959   54941 main.go:141] libmachine: (custom-flannel-104104) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104 (perms=drwx------)
	I0914 23:09:51.078989   54941 main.go:141] libmachine: (custom-flannel-104104) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube/machines (perms=drwxr-xr-x)
	I0914 23:09:51.079009   54941 main.go:141] libmachine: (custom-flannel-104104) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287/.minikube (perms=drwxr-xr-x)
	I0914 23:09:51.079024   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104
	I0914 23:09:51.079042   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube/machines
	I0914 23:09:51.079050   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 23:09:51.079064   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17243-6287
	I0914 23:09:51.079081   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 23:09:51.079096   54941 main.go:141] libmachine: (custom-flannel-104104) Setting executable bit set on /home/jenkins/minikube-integration/17243-6287 (perms=drwxrwxr-x)
	I0914 23:09:51.079115   54941 main.go:141] libmachine: (custom-flannel-104104) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 23:09:51.079126   54941 main.go:141] libmachine: (custom-flannel-104104) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 23:09:51.079138   54941 main.go:141] libmachine: (custom-flannel-104104) Creating domain...
	I0914 23:09:51.079146   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Checking permissions on dir: /home/jenkins
	I0914 23:09:51.079167   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Checking permissions on dir: /home
	I0914 23:09:51.079199   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Skipping /home - not owner
	I0914 23:09:51.080475   54941 main.go:141] libmachine: (custom-flannel-104104) define libvirt domain using xml: 
	I0914 23:09:51.080504   54941 main.go:141] libmachine: (custom-flannel-104104) <domain type='kvm'>
	I0914 23:09:51.080518   54941 main.go:141] libmachine: (custom-flannel-104104)   <name>custom-flannel-104104</name>
	I0914 23:09:51.080534   54941 main.go:141] libmachine: (custom-flannel-104104)   <memory unit='MiB'>3072</memory>
	I0914 23:09:51.080546   54941 main.go:141] libmachine: (custom-flannel-104104)   <vcpu>2</vcpu>
	I0914 23:09:51.080561   54941 main.go:141] libmachine: (custom-flannel-104104)   <features>
	I0914 23:09:51.080573   54941 main.go:141] libmachine: (custom-flannel-104104)     <acpi/>
	I0914 23:09:51.080587   54941 main.go:141] libmachine: (custom-flannel-104104)     <apic/>
	I0914 23:09:51.080602   54941 main.go:141] libmachine: (custom-flannel-104104)     <pae/>
	I0914 23:09:51.080614   54941 main.go:141] libmachine: (custom-flannel-104104)     
	I0914 23:09:51.080625   54941 main.go:141] libmachine: (custom-flannel-104104)   </features>
	I0914 23:09:51.080638   54941 main.go:141] libmachine: (custom-flannel-104104)   <cpu mode='host-passthrough'>
	I0914 23:09:51.080664   54941 main.go:141] libmachine: (custom-flannel-104104)   
	I0914 23:09:51.080682   54941 main.go:141] libmachine: (custom-flannel-104104)   </cpu>
	I0914 23:09:51.080702   54941 main.go:141] libmachine: (custom-flannel-104104)   <os>
	I0914 23:09:51.080716   54941 main.go:141] libmachine: (custom-flannel-104104)     <type>hvm</type>
	I0914 23:09:51.080729   54941 main.go:141] libmachine: (custom-flannel-104104)     <boot dev='cdrom'/>
	I0914 23:09:51.080738   54941 main.go:141] libmachine: (custom-flannel-104104)     <boot dev='hd'/>
	I0914 23:09:51.080753   54941 main.go:141] libmachine: (custom-flannel-104104)     <bootmenu enable='no'/>
	I0914 23:09:51.080766   54941 main.go:141] libmachine: (custom-flannel-104104)   </os>
	I0914 23:09:51.080776   54941 main.go:141] libmachine: (custom-flannel-104104)   <devices>
	I0914 23:09:51.080786   54941 main.go:141] libmachine: (custom-flannel-104104)     <disk type='file' device='cdrom'>
	I0914 23:09:51.080803   54941 main.go:141] libmachine: (custom-flannel-104104)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/boot2docker.iso'/>
	I0914 23:09:51.080819   54941 main.go:141] libmachine: (custom-flannel-104104)       <target dev='hdc' bus='scsi'/>
	I0914 23:09:51.080829   54941 main.go:141] libmachine: (custom-flannel-104104)       <readonly/>
	I0914 23:09:51.080838   54941 main.go:141] libmachine: (custom-flannel-104104)     </disk>
	I0914 23:09:51.080852   54941 main.go:141] libmachine: (custom-flannel-104104)     <disk type='file' device='disk'>
	I0914 23:09:51.080863   54941 main.go:141] libmachine: (custom-flannel-104104)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 23:09:51.080885   54941 main.go:141] libmachine: (custom-flannel-104104)       <source file='/home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/custom-flannel-104104.rawdisk'/>
	I0914 23:09:51.080898   54941 main.go:141] libmachine: (custom-flannel-104104)       <target dev='hda' bus='virtio'/>
	I0914 23:09:51.080911   54941 main.go:141] libmachine: (custom-flannel-104104)     </disk>
	I0914 23:09:51.080924   54941 main.go:141] libmachine: (custom-flannel-104104)     <interface type='network'>
	I0914 23:09:51.080956   54941 main.go:141] libmachine: (custom-flannel-104104)       <source network='mk-custom-flannel-104104'/>
	I0914 23:09:51.080980   54941 main.go:141] libmachine: (custom-flannel-104104)       <model type='virtio'/>
	I0914 23:09:51.081010   54941 main.go:141] libmachine: (custom-flannel-104104)     </interface>
	I0914 23:09:51.081024   54941 main.go:141] libmachine: (custom-flannel-104104)     <interface type='network'>
	I0914 23:09:51.081036   54941 main.go:141] libmachine: (custom-flannel-104104)       <source network='default'/>
	I0914 23:09:51.081049   54941 main.go:141] libmachine: (custom-flannel-104104)       <model type='virtio'/>
	I0914 23:09:51.081062   54941 main.go:141] libmachine: (custom-flannel-104104)     </interface>
	I0914 23:09:51.081074   54941 main.go:141] libmachine: (custom-flannel-104104)     <serial type='pty'>
	I0914 23:09:51.081085   54941 main.go:141] libmachine: (custom-flannel-104104)       <target port='0'/>
	I0914 23:09:51.081099   54941 main.go:141] libmachine: (custom-flannel-104104)     </serial>
	I0914 23:09:51.081113   54941 main.go:141] libmachine: (custom-flannel-104104)     <console type='pty'>
	I0914 23:09:51.081124   54941 main.go:141] libmachine: (custom-flannel-104104)       <target type='serial' port='0'/>
	I0914 23:09:51.081138   54941 main.go:141] libmachine: (custom-flannel-104104)     </console>
	I0914 23:09:51.081151   54941 main.go:141] libmachine: (custom-flannel-104104)     <rng model='virtio'>
	I0914 23:09:51.081166   54941 main.go:141] libmachine: (custom-flannel-104104)       <backend model='random'>/dev/random</backend>
	I0914 23:09:51.081177   54941 main.go:141] libmachine: (custom-flannel-104104)     </rng>
	I0914 23:09:51.081185   54941 main.go:141] libmachine: (custom-flannel-104104)     
	I0914 23:09:51.081197   54941 main.go:141] libmachine: (custom-flannel-104104)     
	I0914 23:09:51.081207   54941 main.go:141] libmachine: (custom-flannel-104104)   </devices>
	I0914 23:09:51.081218   54941 main.go:141] libmachine: (custom-flannel-104104) </domain>
	I0914 23:09:51.081233   54941 main.go:141] libmachine: (custom-flannel-104104) 
	I0914 23:09:51.088162   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:fb:50:ee in network default
	I0914 23:09:51.088764   54941 main.go:141] libmachine: (custom-flannel-104104) Ensuring networks are active...
	I0914 23:09:51.088794   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:51.089523   54941 main.go:141] libmachine: (custom-flannel-104104) Ensuring network default is active
	I0914 23:09:51.089916   54941 main.go:141] libmachine: (custom-flannel-104104) Ensuring network mk-custom-flannel-104104 is active
	I0914 23:09:51.090539   54941 main.go:141] libmachine: (custom-flannel-104104) Getting domain xml...
	I0914 23:09:51.091373   54941 main.go:141] libmachine: (custom-flannel-104104) Creating domain...
	I0914 23:09:50.527968   53573 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694732990.499826730
	
	I0914 23:09:50.527996   53573 fix.go:206] guest clock: 1694732990.499826730
	I0914 23:09:50.528004   53573 fix.go:219] Guest: 2023-09-14 23:09:50.49982673 +0000 UTC Remote: 2023-09-14 23:09:50.401232809 +0000 UTC m=+44.930564776 (delta=98.593921ms)
	I0914 23:09:50.528027   53573 fix.go:190] guest clock delta is within tolerance: 98.593921ms
	I0914 23:09:50.528035   53573 start.go:83] releasing machines lock for "calico-104104", held for 25.432340433s
	I0914 23:09:50.528065   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:09:50.528350   53573 main.go:141] libmachine: (calico-104104) Calling .GetIP
	I0914 23:09:50.532652   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.533055   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:50.533091   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.533273   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:09:50.535824   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:09:50.536043   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:09:50.536102   53573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:09:50.536158   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:50.536296   53573 ssh_runner.go:195] Run: cat /version.json
	I0914 23:09:50.536321   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:09:50.539194   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.539541   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:50.539571   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.539590   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.539747   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:50.539918   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:50.540034   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:50.540057   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:50.540058   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:50.540177   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:09:50.540318   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:09:50.540333   53573 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/id_rsa Username:docker}
	I0914 23:09:50.540541   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:09:50.540687   53573 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/id_rsa Username:docker}
	I0914 23:09:50.669091   53573 ssh_runner.go:195] Run: systemctl --version
	I0914 23:09:50.675397   53573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 23:09:50.832678   53573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 23:09:50.838655   53573 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 23:09:50.838729   53573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:09:50.853204   53573 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:09:50.853233   53573 start.go:469] detecting cgroup driver to use...
	I0914 23:09:50.853281   53573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:09:50.866633   53573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:09:50.877647   53573 docker.go:196] disabling cri-docker service (if available) ...
	I0914 23:09:50.877692   53573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 23:09:50.889901   53573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 23:09:50.903659   53573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 23:09:51.047548   53573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 23:09:51.172172   53573 docker.go:212] disabling docker service ...
	I0914 23:09:51.172237   53573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 23:09:51.186794   53573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 23:09:51.197709   53573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 23:09:51.315064   53573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 23:09:51.423151   53573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 23:09:51.436266   53573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:09:51.455737   53573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 23:09:51.455797   53573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:09:51.466075   53573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 23:09:51.466139   53573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:09:51.476003   53573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:09:51.486766   53573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:09:51.497228   53573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:09:51.506002   53573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:09:51.513347   53573 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 23:09:51.513403   53573 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 23:09:51.524988   53573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:09:51.532986   53573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:09:51.634334   53573 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 23:09:51.799587   53573 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 23:09:51.799659   53573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 23:09:51.807894   53573 start.go:537] Will wait 60s for crictl version
	I0914 23:09:51.807971   53573 ssh_runner.go:195] Run: which crictl
	I0914 23:09:51.811572   53573 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:09:51.848781   53573 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 23:09:51.848858   53573 ssh_runner.go:195] Run: crio --version
	I0914 23:09:51.894196   53573 ssh_runner.go:195] Run: crio --version
	I0914 23:09:51.954679   53573 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 23:09:51.955988   53573 main.go:141] libmachine: (calico-104104) Calling .GetIP
	I0914 23:09:51.959246   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:51.959722   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:09:51.959757   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:09:51.960048   53573 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 23:09:51.963819   53573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 23:09:51.976891   53573 localpath.go:92] copying /home/jenkins/minikube-integration/17243-6287/.minikube/client.crt -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/client.crt
	I0914 23:09:51.977046   53573 localpath.go:117] copying /home/jenkins/minikube-integration/17243-6287/.minikube/client.key -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/client.key
	I0914 23:09:51.977164   53573 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:09:51.977220   53573 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 23:09:52.006575   53573 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 23:09:52.006664   53573 ssh_runner.go:195] Run: which lz4
	I0914 23:09:52.010342   53573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 23:09:52.014244   53573 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 23:09:52.014275   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 23:09:53.763385   53573 crio.go:444] Took 1.753067 seconds to copy over tarball
	I0914 23:09:53.763491   53573 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 23:09:50.876745   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:51.376694   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:51.876276   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:52.376668   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:52.877189   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:53.376314   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:53.876696   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:54.376250   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:54.876346   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:55.376685   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:52.558818   54941 main.go:141] libmachine: (custom-flannel-104104) Waiting to get IP...
	I0914 23:09:52.560014   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:52.560482   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:52.560518   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:52.560437   55233 retry.go:31] will retry after 262.460471ms: waiting for machine to come up
	I0914 23:09:52.825446   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:52.826098   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:52.826122   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:52.826014   55233 retry.go:31] will retry after 345.529483ms: waiting for machine to come up
	I0914 23:09:53.173496   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:53.174113   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:53.174137   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:53.174027   55233 retry.go:31] will retry after 381.635659ms: waiting for machine to come up
	I0914 23:09:53.557714   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:53.558310   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:53.558341   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:53.558282   55233 retry.go:31] will retry after 532.859676ms: waiting for machine to come up
	I0914 23:09:54.093371   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:54.093847   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:54.093884   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:54.093808   55233 retry.go:31] will retry after 614.289285ms: waiting for machine to come up
	I0914 23:09:54.710167   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:54.710751   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:54.710789   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:54.710707   55233 retry.go:31] will retry after 712.458523ms: waiting for machine to come up
	I0914 23:09:55.424911   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:55.425410   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:55.425437   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:55.425344   55233 retry.go:31] will retry after 907.66049ms: waiting for machine to come up
	I0914 23:09:56.334604   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:56.335262   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:56.335297   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:56.335149   55233 retry.go:31] will retry after 1.221302561s: waiting for machine to come up
	I0914 23:09:55.877151   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:56.630883   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:57.164718   53243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:09:58.078861   53243 kubeadm.go:1081] duration metric: took 12.528787602s to wait for elevateKubeSystemPrivileges.
	I0914 23:09:58.078898   53243 kubeadm.go:406] StartCluster complete in 25.882606054s
	I0914 23:09:58.078919   53243 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:58.079000   53243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 23:09:58.080919   53243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:58.100353   53243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 23:09:58.100672   53243 config.go:182] Loaded profile config "kindnet-104104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:09:58.100800   53243 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 23:09:58.100873   53243 addons.go:69] Setting storage-provisioner=true in profile "kindnet-104104"
	I0914 23:09:58.100890   53243 addons.go:231] Setting addon storage-provisioner=true in "kindnet-104104"
	I0914 23:09:58.100944   53243 host.go:66] Checking if "kindnet-104104" exists ...
	I0914 23:09:58.101404   53243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:09:58.101433   53243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:09:58.101582   53243 addons.go:69] Setting default-storageclass=true in profile "kindnet-104104"
	I0914 23:09:58.101603   53243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-104104"
	I0914 23:09:58.102033   53243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:09:58.102058   53243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:09:58.120918   53243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0914 23:09:58.121002   53243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43927
	I0914 23:09:58.121385   53243 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:09:58.121523   53243 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:09:58.121862   53243 main.go:141] libmachine: Using API Version  1
	I0914 23:09:58.121890   53243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:09:58.122293   53243 main.go:141] libmachine: Using API Version  1
	I0914 23:09:58.122302   53243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:09:58.122313   53243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:09:58.122657   53243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:09:58.122824   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetState
	I0914 23:09:58.122853   53243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:09:58.122890   53243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:09:58.140201   53243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0914 23:09:58.140636   53243 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:09:58.141110   53243 main.go:141] libmachine: Using API Version  1
	I0914 23:09:58.141134   53243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:09:58.141790   53243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:09:58.141990   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetState
	I0914 23:09:58.143619   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:58.283162   53243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:09:56.892460   53573 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.128927393s)
	I0914 23:09:56.892495   53573 crio.go:451] Took 3.129083 seconds to extract the tarball
	I0914 23:09:56.892510   53573 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 23:09:56.934423   53573 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 23:09:56.990635   53573 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 23:09:56.990660   53573 cache_images.go:84] Images are preloaded, skipping loading
	I0914 23:09:56.990772   53573 ssh_runner.go:195] Run: crio config
	I0914 23:09:57.052546   53573 cni.go:84] Creating CNI manager for "calico"
	I0914 23:09:57.052576   53573 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 23:09:57.052606   53573 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-104104 NodeName:calico-104104 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 23:09:57.052782   53573 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-104104"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 23:09:57.052846   53573 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=calico-104104 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:calico-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
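The kubelet ExecStart line above is what gets written to the 10-kubeadm.conf drop-in a few lines further down. Purely as an illustrative aside (these are not commands the test runs), the effective unit on the VM could be inspected through the profile's SSH access:

    # show the kubelet unit plus its drop-ins, including 10-kubeadm.conf
    minikube -p calico-104104 ssh -- sudo systemctl cat kubelet
    # print just the effective ExecStart the service manager will use
    minikube -p calico-104104 ssh -- systemctl show kubelet -p ExecStart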
	I0914 23:09:57.052896   53573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 23:09:57.061626   53573 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 23:09:57.061741   53573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 23:09:57.070411   53573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0914 23:09:57.086463   53573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 23:09:57.102974   53573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
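The kubeadm configuration rendered above is staged here as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml further down, just before init. As a hedged aside (the test goes straight to `kubeadm init`), a staged config like this can be exercised without touching the host:

    # run all init phases in dry-run mode against the generated config
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # pre-pull the control-plane images the log mentions in its preflight output
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml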
	I0914 23:09:57.118858   53573 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0914 23:09:57.122530   53573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 23:09:57.135436   53573 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104 for IP: 192.168.39.36
	I0914 23:09:57.135482   53573 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:57.135667   53573 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 23:09:57.135731   53573 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 23:09:57.135824   53573 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/client.key
	I0914 23:09:57.135852   53573 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.key.35089287
	I0914 23:09:57.135878   53573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.crt.35089287 with IP's: [192.168.39.36 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 23:09:57.381983   53573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.crt.35089287 ...
	I0914 23:09:57.382017   53573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.crt.35089287: {Name:mk10aed4081b6516962a659d907ca33990cce716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:57.382221   53573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.key.35089287 ...
	I0914 23:09:57.382237   53573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.key.35089287: {Name:mkf908f42b18addd063ddce36faa6c78b67ad922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:57.382358   53573 certs.go:337] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.crt.35089287 -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.crt
	I0914 23:09:57.382452   53573 certs.go:341] copying /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.key.35089287 -> /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.key
	I0914 23:09:57.382539   53573 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/proxy-client.key
	I0914 23:09:57.382559   53573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/proxy-client.crt with IP's: []
	I0914 23:09:57.782548   53573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/proxy-client.crt ...
	I0914 23:09:57.782577   53573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/proxy-client.crt: {Name:mk49924d5392063d3984dbb52531e75544c8c260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:57.782731   53573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/proxy-client.key ...
	I0914 23:09:57.782742   53573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/proxy-client.key: {Name:mk8d361a5e5fba0c8f3d1cb731ef71ed19a182f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:09:57.782895   53573 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 23:09:57.782930   53573 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 23:09:57.782937   53573 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 23:09:57.782965   53573 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 23:09:57.782999   53573 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 23:09:57.783023   53573 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 23:09:57.783062   53573 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 23:09:57.783654   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 23:09:57.806796   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 23:09:57.836207   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 23:09:57.860420   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/calico-104104/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 23:09:57.906027   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 23:09:57.928608   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 23:09:57.951483   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 23:09:57.975174   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 23:09:58.001206   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 23:09:58.024736   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 23:09:58.046663   53573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 23:09:58.072048   53573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 23:09:58.089971   53573 ssh_runner.go:195] Run: openssl version
	I0914 23:09:58.095418   53573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 23:09:58.107071   53573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:09:58.113881   53573 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:09:58.113942   53573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:09:58.120245   53573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 23:09:58.134197   53573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 23:09:58.147712   53573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 23:09:58.153575   53573 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 23:09:58.153636   53573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 23:09:58.160571   53573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 23:09:58.173929   53573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 23:09:58.186950   53573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 23:09:58.192978   53573 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 23:09:58.193031   53573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 23:09:58.200553   53573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 23:09:58.213453   53573 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 23:09:58.218375   53573 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 23:09:58.218433   53573 kubeadm.go:404] StartCluster: {Name:calico-104104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-104104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:09:58.218519   53573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 23:09:58.218590   53573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 23:09:58.258635   53573 cri.go:89] found id: ""
	I0914 23:09:58.258699   53573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 23:09:58.269411   53573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:09:58.279758   53573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:09:58.290027   53573 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:09:58.290082   53573 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 23:09:58.340194   53573 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 23:09:58.340272   53573 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 23:09:58.477339   53573 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 23:09:58.477474   53573 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 23:09:58.477601   53573 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 23:09:58.672426   53573 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 23:09:58.343799   53243 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:09:58.343823   53243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 23:09:58.343850   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:58.347342   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:58.347808   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:58.347858   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:58.348090   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:58.348305   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:58.348481   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:58.348607   53243 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/kindnet-104104/id_rsa Username:docker}
	I0914 23:09:58.450836   53243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:09:58.570512   53243 addons.go:231] Setting addon default-storageclass=true in "kindnet-104104"
	I0914 23:09:58.570570   53243 host.go:66] Checking if "kindnet-104104" exists ...
	I0914 23:09:58.570983   53243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:09:58.571026   53243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:09:58.577941   53243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 23:09:58.591991   53243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0914 23:09:58.592450   53243 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:09:58.592971   53243 main.go:141] libmachine: Using API Version  1
	I0914 23:09:58.592998   53243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:09:58.593354   53243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:09:58.593992   53243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:09:58.594025   53243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:09:58.613187   53243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I0914 23:09:58.613863   53243 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:09:58.614567   53243 main.go:141] libmachine: Using API Version  1
	I0914 23:09:58.614590   53243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:09:58.614970   53243 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:09:58.615204   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetState
	I0914 23:09:58.617223   53243 main.go:141] libmachine: (kindnet-104104) Calling .DriverName
	I0914 23:09:58.617485   53243 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 23:09:58.617500   53243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 23:09:58.617519   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHHostname
	I0914 23:09:58.620899   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:58.621301   53243 main.go:141] libmachine: (kindnet-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:42:8b", ip: ""} in network mk-kindnet-104104: {Iface:virbr4 ExpiryTime:2023-09-15 00:09:16 +0000 UTC Type:0 Mac:52:54:00:2c:42:8b Iaid: IPaddr:192.168.72.231 Prefix:24 Hostname:kindnet-104104 Clientid:01:52:54:00:2c:42:8b}
	I0914 23:09:58.621334   53243 main.go:141] libmachine: (kindnet-104104) DBG | domain kindnet-104104 has defined IP address 192.168.72.231 and MAC address 52:54:00:2c:42:8b in network mk-kindnet-104104
	I0914 23:09:58.621460   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHPort
	I0914 23:09:58.621600   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHKeyPath
	I0914 23:09:58.621713   53243 main.go:141] libmachine: (kindnet-104104) Calling .GetSSHUsername
	I0914 23:09:58.621818   53243 sshutil.go:53] new ssh client: &{IP:192.168.72.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/kindnet-104104/id_rsa Username:docker}
	I0914 23:09:58.758340   53243 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-104104" context rescaled to 1 replicas
	I0914 23:09:58.758379   53243 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.231 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 23:09:58.760025   53243 out.go:177] * Verifying Kubernetes components...
	I0914 23:09:58.761729   53243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:09:58.855519   53243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 23:09:59.601539   53243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.150663043s)
	I0914 23:09:59.601643   53243 main.go:141] libmachine: Making call to close driver server
	I0914 23:09:59.601665   53243 main.go:141] libmachine: (kindnet-104104) Calling .Close
	I0914 23:09:59.601683   53243 main.go:141] libmachine: Making call to close driver server
	I0914 23:09:59.601705   53243 main.go:141] libmachine: (kindnet-104104) Calling .Close
	I0914 23:09:59.601601   53243 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.023628367s)
	I0914 23:09:59.601779   53243 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 23:09:59.601946   53243 main.go:141] libmachine: (kindnet-104104) DBG | Closing plugin on server side
	I0914 23:09:59.601967   53243 main.go:141] libmachine: Successfully made call to close driver server
	I0914 23:09:59.601979   53243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 23:09:59.601989   53243 main.go:141] libmachine: Making call to close driver server
	I0914 23:09:59.601999   53243 main.go:141] libmachine: (kindnet-104104) Calling .Close
	I0914 23:09:59.602161   53243 main.go:141] libmachine: (kindnet-104104) DBG | Closing plugin on server side
	I0914 23:09:59.602199   53243 main.go:141] libmachine: Successfully made call to close driver server
	I0914 23:09:59.602207   53243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 23:09:59.602216   53243 main.go:141] libmachine: Making call to close driver server
	I0914 23:09:59.602224   53243 main.go:141] libmachine: (kindnet-104104) Calling .Close
	I0914 23:09:59.602331   53243 main.go:141] libmachine: (kindnet-104104) DBG | Closing plugin on server side
	I0914 23:09:59.602371   53243 main.go:141] libmachine: Successfully made call to close driver server
	I0914 23:09:59.602380   53243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 23:09:59.602657   53243 main.go:141] libmachine: (kindnet-104104) DBG | Closing plugin on server side
	I0914 23:09:59.602719   53243 main.go:141] libmachine: Successfully made call to close driver server
	I0914 23:09:59.602748   53243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 23:09:59.602762   53243 main.go:141] libmachine: Making call to close driver server
	I0914 23:09:59.602777   53243 main.go:141] libmachine: (kindnet-104104) Calling .Close
	I0914 23:09:59.603110   53243 node_ready.go:35] waiting up to 15m0s for node "kindnet-104104" to be "Ready" ...
	I0914 23:09:59.603501   53243 main.go:141] libmachine: (kindnet-104104) DBG | Closing plugin on server side
	I0914 23:09:59.603545   53243 main.go:141] libmachine: Successfully made call to close driver server
	I0914 23:09:59.603560   53243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 23:09:59.605627   53243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 23:09:58.675106   53573 out.go:204]   - Generating certificates and keys ...
	I0914 23:09:58.675213   53573 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 23:09:58.675303   53573 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 23:09:58.865419   53573 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 23:09:59.160926   53573 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 23:09:59.667063   53573 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 23:09:59.968680   53573 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 23:10:00.102353   53573 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 23:10:00.102485   53573 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [calico-104104 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0914 23:09:59.607172   53243 addons.go:502] enable addons completed in 1.506358109s: enabled=[storage-provisioner default-storageclass]
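With storage-provisioner and default-storageclass enabled on the kindnet-104104 profile, a quick way to see the result by hand (nothing this test asserts) is to list the addons and the storage class that ends up marked as default:

    # illustrative checks against the kindnet-104104 profile
    minikube -p kindnet-104104 addons list
    kubectl --context kindnet-104104 get storageclass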
	I0914 23:09:57.558431   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:57.558917   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:57.558953   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:57.558829   55233 retry.go:31] will retry after 1.691491288s: waiting for machine to come up
	I0914 23:09:59.252693   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:09:59.253226   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:09:59.253253   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:09:59.253172   55233 retry.go:31] will retry after 2.017386226s: waiting for machine to come up
	I0914 23:10:01.272261   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:01.272687   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:10:01.272720   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:10:01.272633   55233 retry.go:31] will retry after 2.503343845s: waiting for machine to come up
	I0914 23:10:00.594583   53573 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 23:10:00.595005   53573 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [calico-104104 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0914 23:10:00.817084   53573 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 23:10:00.972138   53573 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 23:10:01.094942   53573 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 23:10:01.095301   53573 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 23:10:01.320610   53573 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 23:10:01.463458   53573 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 23:10:01.624311   53573 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 23:10:01.742522   53573 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 23:10:01.743345   53573 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 23:10:01.746887   53573 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 23:10:01.748738   53573 out.go:204]   - Booting up control plane ...
	I0914 23:10:01.748887   53573 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 23:10:01.748992   53573 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 23:10:01.749869   53573 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 23:10:01.766922   53573 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 23:10:01.767061   53573 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 23:10:01.767126   53573 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 23:10:01.910251   53573 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 23:10:01.616843   53243 node_ready.go:58] node "kindnet-104104" has status "Ready":"False"
	I0914 23:10:03.617069   53243 node_ready.go:58] node "kindnet-104104" has status "Ready":"False"
	I0914 23:10:03.777739   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:03.778334   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:10:03.778364   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:10:03.778283   55233 retry.go:31] will retry after 3.59349992s: waiting for machine to come up
	I0914 23:10:09.908225   53573 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002245 seconds
	I0914 23:10:09.908355   53573 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 23:10:09.933343   53573 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 23:10:10.470572   53573 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 23:10:10.470745   53573 kubeadm.go:322] [mark-control-plane] Marking the node calico-104104 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 23:10:06.117349   53243 node_ready.go:58] node "kindnet-104104" has status "Ready":"False"
	I0914 23:10:08.617582   53243 node_ready.go:49] node "kindnet-104104" has status "Ready":"True"
	I0914 23:10:08.617608   53243 node_ready.go:38] duration metric: took 9.01447145s waiting for node "kindnet-104104" to be "Ready" ...
	I0914 23:10:08.617618   53243 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:10:08.625248   53243 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-dpqcx" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:10.646144   53243 pod_ready.go:102] pod "coredns-5dd5756b68-dpqcx" in "kube-system" namespace has status "Ready":"False"
	I0914 23:10:10.986076   53573 kubeadm.go:322] [bootstrap-token] Using token: n6wy8v.r26kebwls8inl04p
	I0914 23:10:07.373475   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:07.373983   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:10:07.374029   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:10:07.373955   55233 retry.go:31] will retry after 3.22295231s: waiting for machine to come up
	I0914 23:10:10.598010   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:10.598475   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find current IP address of domain custom-flannel-104104 in network mk-custom-flannel-104104
	I0914 23:10:10.598488   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | I0914 23:10:10.598456   55233 retry.go:31] will retry after 5.254971968s: waiting for machine to come up
	I0914 23:10:10.988040   53573 out.go:204]   - Configuring RBAC rules ...
	I0914 23:10:10.988168   53573 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 23:10:10.992491   53573 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 23:10:11.000258   53573 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 23:10:11.003630   53573 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 23:10:11.010486   53573 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 23:10:11.014795   53573 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 23:10:11.030040   53573 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 23:10:11.278596   53573 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 23:10:11.404028   53573 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 23:10:11.404052   53573 kubeadm.go:322] 
	I0914 23:10:11.404132   53573 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 23:10:11.404144   53573 kubeadm.go:322] 
	I0914 23:10:11.404245   53573 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 23:10:11.404255   53573 kubeadm.go:322] 
	I0914 23:10:11.404286   53573 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 23:10:11.404401   53573 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 23:10:11.404481   53573 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 23:10:11.404491   53573 kubeadm.go:322] 
	I0914 23:10:11.404552   53573 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 23:10:11.404575   53573 kubeadm.go:322] 
	I0914 23:10:11.404647   53573 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 23:10:11.404658   53573 kubeadm.go:322] 
	I0914 23:10:11.404744   53573 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 23:10:11.404833   53573 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 23:10:11.404921   53573 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 23:10:11.404930   53573 kubeadm.go:322] 
	I0914 23:10:11.405057   53573 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 23:10:11.405168   53573 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 23:10:11.405183   53573 kubeadm.go:322] 
	I0914 23:10:11.405282   53573 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token n6wy8v.r26kebwls8inl04p \
	I0914 23:10:11.405381   53573 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 23:10:11.405414   53573 kubeadm.go:322] 	--control-plane 
	I0914 23:10:11.405429   53573 kubeadm.go:322] 
	I0914 23:10:11.405506   53573 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 23:10:11.405513   53573 kubeadm.go:322] 
	I0914 23:10:11.405600   53573 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token n6wy8v.r26kebwls8inl04p \
	I0914 23:10:11.405752   53573 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 23:10:11.405915   53573 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 23:10:11.405939   53573 cni.go:84] Creating CNI manager for "calico"
	I0914 23:10:11.407679   53573 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0914 23:10:11.146662   53243 pod_ready.go:92] pod "coredns-5dd5756b68-dpqcx" in "kube-system" namespace has status "Ready":"True"
	I0914 23:10:11.146688   53243 pod_ready.go:81] duration metric: took 2.521410419s waiting for pod "coredns-5dd5756b68-dpqcx" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.146701   53243 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-104104" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.153456   53243 pod_ready.go:92] pod "etcd-kindnet-104104" in "kube-system" namespace has status "Ready":"True"
	I0914 23:10:11.153479   53243 pod_ready.go:81] duration metric: took 6.770076ms waiting for pod "etcd-kindnet-104104" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.153493   53243 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-104104" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.159122   53243 pod_ready.go:92] pod "kube-apiserver-kindnet-104104" in "kube-system" namespace has status "Ready":"True"
	I0914 23:10:11.159147   53243 pod_ready.go:81] duration metric: took 5.644232ms waiting for pod "kube-apiserver-kindnet-104104" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.159159   53243 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-104104" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.164108   53243 pod_ready.go:92] pod "kube-controller-manager-kindnet-104104" in "kube-system" namespace has status "Ready":"True"
	I0914 23:10:11.164129   53243 pod_ready.go:81] duration metric: took 4.960612ms waiting for pod "kube-controller-manager-kindnet-104104" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.164141   53243 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-6k2nd" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.417006   53243 pod_ready.go:92] pod "kube-proxy-6k2nd" in "kube-system" namespace has status "Ready":"True"
	I0914 23:10:11.417031   53243 pod_ready.go:81] duration metric: took 252.880838ms waiting for pod "kube-proxy-6k2nd" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.417043   53243 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-104104" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.818449   53243 pod_ready.go:92] pod "kube-scheduler-kindnet-104104" in "kube-system" namespace has status "Ready":"True"
	I0914 23:10:11.818467   53243 pod_ready.go:81] duration metric: took 401.416722ms waiting for pod "kube-scheduler-kindnet-104104" in "kube-system" namespace to be "Ready" ...
	I0914 23:10:11.818478   53243 pod_ready.go:38] duration metric: took 3.200843946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:10:11.818492   53243 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:10:11.818549   53243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:10:11.839136   53243 api_server.go:72] duration metric: took 13.080695486s to wait for apiserver process to appear ...
	I0914 23:10:11.839164   53243 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:10:11.839184   53243 api_server.go:253] Checking apiserver healthz at https://192.168.72.231:8443/healthz ...
	I0914 23:10:11.851741   53243 api_server.go:279] https://192.168.72.231:8443/healthz returned 200:
	ok
	I0914 23:10:11.853434   53243 api_server.go:141] control plane version: v1.28.1
	I0914 23:10:11.853454   53243 api_server.go:131] duration metric: took 14.283363ms to wait for apiserver health ...
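The healthz probe above is an HTTPS GET against the apiserver endpoint from the log. The same check can be reproduced manually; a minimal sketch, assuming the usual anonymous access to /healthz is still allowed on this cluster:

    # -k because the serving cert is signed by the cluster's own CA
    curl -k https://192.168.72.231:8443/healthz
    # or let kubectl handle TLS and auth via the kubeconfig context
    kubectl --context kindnet-104104 get --raw /healthz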
	I0914 23:10:11.853461   53243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 23:10:12.020665   53243 system_pods.go:59] 8 kube-system pods found
	I0914 23:10:12.020694   53243 system_pods.go:61] "coredns-5dd5756b68-dpqcx" [3ca8ca15-f392-4d1c-bcc5-417b441aa073] Running
	I0914 23:10:12.020699   53243 system_pods.go:61] "etcd-kindnet-104104" [16d7a4a0-76ad-486c-a879-c5acd0d2cd83] Running
	I0914 23:10:12.020703   53243 system_pods.go:61] "kindnet-49m7n" [043dc0a8-431b-4569-9fea-a034ddaba4fd] Running
	I0914 23:10:12.020707   53243 system_pods.go:61] "kube-apiserver-kindnet-104104" [72e5ab45-ab96-4730-b07d-37a1df989792] Running
	I0914 23:10:12.020711   53243 system_pods.go:61] "kube-controller-manager-kindnet-104104" [0dc08c00-b94b-4b8c-adb7-d9c475aee052] Running
	I0914 23:10:12.020717   53243 system_pods.go:61] "kube-proxy-6k2nd" [d6761854-c5b7-4307-a68b-5157dd4a777c] Running
	I0914 23:10:12.020722   53243 system_pods.go:61] "kube-scheduler-kindnet-104104" [60d1e6ba-729a-42c8-a1cb-742d19a1eee5] Running
	I0914 23:10:12.020726   53243 system_pods.go:61] "storage-provisioner" [81004849-29cf-441b-be5d-7e20dd0a0295] Running
	I0914 23:10:12.020731   53243 system_pods.go:74] duration metric: took 167.265839ms to wait for pod list to return data ...
	I0914 23:10:12.020740   53243 default_sa.go:34] waiting for default service account to be created ...
	I0914 23:10:12.216487   53243 default_sa.go:45] found service account: "default"
	I0914 23:10:12.216519   53243 default_sa.go:55] duration metric: took 195.772469ms for default service account to be created ...
	I0914 23:10:12.216529   53243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 23:10:12.419618   53243 system_pods.go:86] 8 kube-system pods found
	I0914 23:10:12.419643   53243 system_pods.go:89] "coredns-5dd5756b68-dpqcx" [3ca8ca15-f392-4d1c-bcc5-417b441aa073] Running
	I0914 23:10:12.419649   53243 system_pods.go:89] "etcd-kindnet-104104" [16d7a4a0-76ad-486c-a879-c5acd0d2cd83] Running
	I0914 23:10:12.419653   53243 system_pods.go:89] "kindnet-49m7n" [043dc0a8-431b-4569-9fea-a034ddaba4fd] Running
	I0914 23:10:12.419658   53243 system_pods.go:89] "kube-apiserver-kindnet-104104" [72e5ab45-ab96-4730-b07d-37a1df989792] Running
	I0914 23:10:12.419662   53243 system_pods.go:89] "kube-controller-manager-kindnet-104104" [0dc08c00-b94b-4b8c-adb7-d9c475aee052] Running
	I0914 23:10:12.419665   53243 system_pods.go:89] "kube-proxy-6k2nd" [d6761854-c5b7-4307-a68b-5157dd4a777c] Running
	I0914 23:10:12.419669   53243 system_pods.go:89] "kube-scheduler-kindnet-104104" [60d1e6ba-729a-42c8-a1cb-742d19a1eee5] Running
	I0914 23:10:12.419676   53243 system_pods.go:89] "storage-provisioner" [81004849-29cf-441b-be5d-7e20dd0a0295] Running
	I0914 23:10:12.419684   53243 system_pods.go:126] duration metric: took 203.148254ms to wait for k8s-apps to be running ...
	I0914 23:10:12.419702   53243 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 23:10:12.419756   53243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:10:12.433451   53243 system_svc.go:56] duration metric: took 13.740697ms WaitForService to wait for kubelet.
	I0914 23:10:12.433474   53243 kubeadm.go:581] duration metric: took 13.675040599s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 23:10:12.433494   53243 node_conditions.go:102] verifying NodePressure condition ...
	I0914 23:10:12.616316   53243 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 23:10:12.616345   53243 node_conditions.go:123] node cpu capacity is 2
	I0914 23:10:12.616357   53243 node_conditions.go:105] duration metric: took 182.857972ms to run NodePressure ...
	I0914 23:10:12.616367   53243 start.go:228] waiting for startup goroutines ...
	I0914 23:10:12.616375   53243 start.go:233] waiting for cluster config update ...
	I0914 23:10:12.616383   53243 start.go:242] writing updated cluster config ...
	I0914 23:10:12.616585   53243 ssh_runner.go:195] Run: rm -f paused
	I0914 23:10:12.670369   53243 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 23:10:12.673624   53243 out.go:177] * Done! kubectl is now configured to use "kindnet-104104" cluster and "default" namespace by default
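At this point the kindnet-104104 profile is up and its context is the kubectl default. A couple of illustrative follow-up commands (not part of the test) to confirm the node and the kube-system pods listed above, including the kindnet-49m7n DaemonSet pod:

    kubectl --context kindnet-104104 get nodes -o wide
    kubectl --context kindnet-104104 -n kube-system get pods -o wide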
	I0914 23:10:11.409324   53573 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 23:10:11.409341   53573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (244810 bytes)
	I0914 23:10:11.428750   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 23:10:13.430092   53573 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.001292815s)
	I0914 23:10:13.430145   53573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:10:13.430235   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:13.430235   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=calico-104104 minikube.k8s.io/updated_at=2023_09_14T23_10_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:13.446098   53573 ops.go:34] apiserver oom_adj: -16
	I0914 23:10:13.541729   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:13.641298   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:14.224392   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:14.724444   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:15.224422   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:15.854728   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:15.855211   54941 main.go:141] libmachine: (custom-flannel-104104) Found IP for machine: 192.168.50.104
	I0914 23:10:15.855264   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has current primary IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:15.855280   54941 main.go:141] libmachine: (custom-flannel-104104) Reserving static IP address...
	I0914 23:10:15.855639   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find host DHCP lease matching {name: "custom-flannel-104104", mac: "52:54:00:5c:6d:a5", ip: "192.168.50.104"} in network mk-custom-flannel-104104
	I0914 23:10:15.933041   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Getting to WaitForSSH function...
	I0914 23:10:15.933071   54941 main.go:141] libmachine: (custom-flannel-104104) Reserved static IP address: 192.168.50.104
	I0914 23:10:15.933087   54941 main.go:141] libmachine: (custom-flannel-104104) Waiting for SSH to be available...
	I0914 23:10:15.936114   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:15.936516   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104
	I0914 23:10:15.936549   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | unable to find defined IP address of network mk-custom-flannel-104104 interface with MAC address 52:54:00:5c:6d:a5
	I0914 23:10:15.936746   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Using SSH client type: external
	I0914 23:10:15.936783   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa (-rw-------)
	I0914 23:10:15.936836   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 23:10:15.936871   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | About to run SSH command:
	I0914 23:10:15.936889   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | exit 0
	I0914 23:10:15.940414   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | SSH cmd err, output: exit status 255: 
	I0914 23:10:15.940430   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0914 23:10:15.940437   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | command : exit 0
	I0914 23:10:15.940448   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | err     : exit status 255
	I0914 23:10:15.940456   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | output  : 
	I0914 23:10:15.724319   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:16.224431   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:16.724724   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:17.224084   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:17.723718   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:18.224134   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:18.724007   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:19.223783   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:19.724409   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:20.223959   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:18.942582   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Getting to WaitForSSH function...
	I0914 23:10:18.945001   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:18.945469   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:18.945508   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:18.945746   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Using SSH client type: external
	I0914 23:10:18.945766   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa (-rw-------)
	I0914 23:10:18.945818   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 23:10:18.945846   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | About to run SSH command:
	I0914 23:10:18.945873   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | exit 0
	I0914 23:10:19.043578   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | SSH cmd err, output: <nil>: 
	I0914 23:10:19.043883   54941 main.go:141] libmachine: (custom-flannel-104104) KVM machine creation complete!
	I0914 23:10:19.044224   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetConfigRaw
	I0914 23:10:19.044855   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .DriverName
	I0914 23:10:19.045090   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .DriverName
	I0914 23:10:19.045276   54941 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 23:10:19.045291   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetState
	I0914 23:10:19.046657   54941 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 23:10:19.046671   54941 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 23:10:19.046677   54941 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 23:10:19.046684   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:19.049461   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.049849   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:19.049880   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.050061   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:19.050249   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.050417   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.050573   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:19.050753   54941 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:19.051104   54941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0914 23:10:19.051119   54941 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 23:10:19.174507   54941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:10:19.174533   54941 main.go:141] libmachine: Detecting the provisioner...
	I0914 23:10:19.174545   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:19.177603   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.177937   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:19.177962   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.178128   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:19.178317   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.178486   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.178647   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:19.178847   54941 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:19.179291   54941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0914 23:10:19.179309   54941 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 23:10:19.308370   54941 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g52d8811-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0914 23:10:19.308443   54941 main.go:141] libmachine: found compatible host: buildroot
	I0914 23:10:19.308459   54941 main.go:141] libmachine: Provisioning with buildroot...
	I0914 23:10:19.308475   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetMachineName
	I0914 23:10:19.308730   54941 buildroot.go:166] provisioning hostname "custom-flannel-104104"
	I0914 23:10:19.308757   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetMachineName
	I0914 23:10:19.309018   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:19.311692   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.312107   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:19.312161   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.312320   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:19.312505   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.312665   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.312805   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:19.312944   54941 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:19.313364   54941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0914 23:10:19.313390   54941 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-104104 && echo "custom-flannel-104104" | sudo tee /etc/hostname
	I0914 23:10:19.451814   54941 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-104104
	
	I0914 23:10:19.451847   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:19.454533   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.454927   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:19.454963   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.455112   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:19.455302   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.455485   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.455694   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:19.455876   54941 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:19.456349   54941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0914 23:10:19.456383   54941 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-104104' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-104104/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-104104' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:10:19.587664   54941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:10:19.587699   54941 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 23:10:19.587730   54941 buildroot.go:174] setting up certificates
	I0914 23:10:19.587741   54941 provision.go:83] configureAuth start
	I0914 23:10:19.587757   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetMachineName
	I0914 23:10:19.588021   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetIP
	I0914 23:10:19.590604   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.591019   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:19.591048   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.591210   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:19.593373   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.593677   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:19.593713   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.593863   54941 provision.go:138] copyHostCerts
	I0914 23:10:19.593924   54941 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 23:10:19.593940   54941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 23:10:19.594011   54941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 23:10:19.594131   54941 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 23:10:19.594153   54941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 23:10:19.594186   54941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 23:10:19.594295   54941 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 23:10:19.594308   54941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 23:10:19.594338   54941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 23:10:19.594420   54941 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-104104 san=[192.168.50.104 192.168.50.104 localhost 127.0.0.1 minikube custom-flannel-104104]
	I0914 23:10:19.779991   54941 provision.go:172] copyRemoteCerts
	I0914 23:10:19.780063   54941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:10:19.780089   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:19.783534   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.784017   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:19.784050   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.784345   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:19.784563   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.784753   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:19.784925   54941 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa Username:docker}
	I0914 23:10:19.876833   54941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:10:19.904092   54941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 23:10:19.927335   54941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:10:19.950516   54941 provision.go:86] duration metric: configureAuth took 362.757315ms
	I0914 23:10:19.950547   54941 buildroot.go:189] setting minikube options for container-runtime
	I0914 23:10:19.950769   54941 config.go:182] Loaded profile config "custom-flannel-104104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:10:19.950847   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:19.953835   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.954291   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:19.954329   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:19.954465   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:19.954689   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.954848   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:19.954985   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:19.955179   54941 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:19.955638   54941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0914 23:10:19.955670   54941 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:10:20.311097   54941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:10:20.311180   54941 main.go:141] libmachine: Checking connection to Docker...
	I0914 23:10:20.311206   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetURL
	I0914 23:10:20.312678   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | Using libvirt version 6000000
	I0914 23:10:20.315307   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.315721   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:20.315757   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.315946   54941 main.go:141] libmachine: Docker is up and running!
	I0914 23:10:20.315964   54941 main.go:141] libmachine: Reticulating splines...
	I0914 23:10:20.315972   54941 client.go:171] LocalClient.Create took 29.765127437s
	I0914 23:10:20.315997   54941 start.go:167] duration metric: libmachine.API.Create for "custom-flannel-104104" took 29.765200765s
	I0914 23:10:20.316010   54941 start.go:300] post-start starting for "custom-flannel-104104" (driver="kvm2")
	I0914 23:10:20.316023   54941 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:10:20.316045   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .DriverName
	I0914 23:10:20.316267   54941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:10:20.316296   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:20.318925   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.319331   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:20.319397   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.319648   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:20.319841   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:20.320025   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:20.320231   54941 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa Username:docker}
	I0914 23:10:20.415264   54941 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:10:20.419749   54941 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 23:10:20.419777   54941 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 23:10:20.419852   54941 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 23:10:20.419979   54941 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 23:10:20.420105   54941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:10:20.428744   54941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 23:10:20.452938   54941 start.go:303] post-start completed in 136.910451ms
	I0914 23:10:20.452992   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetConfigRaw
	I0914 23:10:20.453687   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetIP
	I0914 23:10:20.456970   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.457370   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:20.457408   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.457663   54941 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/custom-flannel-104104/config.json ...
	I0914 23:10:20.457930   54941 start.go:128] duration metric: createHost completed in 29.929573484s
	I0914 23:10:20.457961   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:20.460641   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.460998   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:20.461048   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.461138   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:20.461320   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:20.461482   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:20.461657   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:20.461844   54941 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:20.462193   54941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0914 23:10:20.462217   54941 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 23:10:20.592340   54941 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694733020.580140763
	
	I0914 23:10:20.592417   54941 fix.go:206] guest clock: 1694733020.580140763
	I0914 23:10:20.592431   54941 fix.go:219] Guest: 2023-09-14 23:10:20.580140763 +0000 UTC Remote: 2023-09-14 23:10:20.457944652 +0000 UTC m=+59.101337360 (delta=122.196111ms)
	I0914 23:10:20.592483   54941 fix.go:190] guest clock delta is within tolerance: 122.196111ms
	I0914 23:10:20.592490   54941 start.go:83] releasing machines lock for "custom-flannel-104104", held for 30.064342302s
	I0914 23:10:20.592518   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .DriverName
	I0914 23:10:20.592815   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetIP
	I0914 23:10:20.595639   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.596075   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:20.596106   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.596284   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .DriverName
	I0914 23:10:20.596849   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .DriverName
	I0914 23:10:20.597046   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .DriverName
	I0914 23:10:20.597123   54941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:10:20.597164   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:20.597269   54941 ssh_runner.go:195] Run: cat /version.json
	I0914 23:10:20.597300   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHHostname
	I0914 23:10:20.599981   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.600337   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.600369   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:20.600394   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.600537   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:20.600680   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:20.600822   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:20.600821   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:6d:a5", ip: ""} in network mk-custom-flannel-104104: {Iface:virbr3 ExpiryTime:2023-09-15 00:10:07 +0000 UTC Type:0 Mac:52:54:00:5c:6d:a5 Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:custom-flannel-104104 Clientid:01:52:54:00:5c:6d:a5}
	I0914 23:10:20.600877   54941 main.go:141] libmachine: (custom-flannel-104104) DBG | domain custom-flannel-104104 has defined IP address 192.168.50.104 and MAC address 52:54:00:5c:6d:a5 in network mk-custom-flannel-104104
	I0914 23:10:20.600966   54941 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa Username:docker}
	I0914 23:10:20.601066   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHPort
	I0914 23:10:20.601199   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHKeyPath
	I0914 23:10:20.601354   54941 main.go:141] libmachine: (custom-flannel-104104) Calling .GetSSHUsername
	I0914 23:10:20.601522   54941 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/custom-flannel-104104/id_rsa Username:docker}
	I0914 23:10:20.726890   54941 ssh_runner.go:195] Run: systemctl --version
	I0914 23:10:20.733606   54941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 23:10:20.900751   54941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 23:10:20.907100   54941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 23:10:20.907170   54941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:10:20.921432   54941 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:10:20.921465   54941 start.go:469] detecting cgroup driver to use...
	I0914 23:10:20.921582   54941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:10:20.934479   54941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:10:20.947871   54941 docker.go:196] disabling cri-docker service (if available) ...
	I0914 23:10:20.947930   54941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 23:10:20.963568   54941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 23:10:20.979268   54941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 23:10:21.114526   54941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 23:10:21.267622   54941 docker.go:212] disabling docker service ...
	I0914 23:10:21.267700   54941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 23:10:21.284858   54941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 23:10:21.297561   54941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 23:10:21.421665   54941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 23:10:21.546755   54941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 23:10:21.561524   54941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:10:21.581095   54941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 23:10:21.581168   54941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:10:21.591048   54941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 23:10:21.591140   54941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:10:21.602592   54941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:10:21.615280   54941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:10:21.627342   54941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:10:21.643202   54941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:10:21.655761   54941 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 23:10:21.655874   54941 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 23:10:21.670748   54941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:10:21.682331   54941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:10:21.808945   54941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 23:10:22.346427   54941 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 23:10:22.346527   54941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 23:10:22.352382   54941 start.go:537] Will wait 60s for crictl version
	I0914 23:10:22.352456   54941 ssh_runner.go:195] Run: which crictl
	I0914 23:10:22.356673   54941 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:10:22.390470   54941 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 23:10:22.390556   54941 ssh_runner.go:195] Run: crio --version
	I0914 23:10:22.438492   54941 ssh_runner.go:195] Run: crio --version
	I0914 23:10:22.494737   54941 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 23:10:20.724662   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:21.224691   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:21.724688   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:22.224723   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:22.724747   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:23.224688   53573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:10:23.391318   53573 kubeadm.go:1081] duration metric: took 9.961140741s to wait for elevateKubeSystemPrivileges.
	I0914 23:10:23.391354   53573 kubeadm.go:406] StartCluster complete in 25.172926355s
	I0914 23:10:23.391374   53573 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:10:23.391480   53573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 23:10:23.393736   53573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:10:23.394000   53573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 23:10:23.394141   53573 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 23:10:23.394206   53573 addons.go:69] Setting storage-provisioner=true in profile "calico-104104"
	I0914 23:10:23.394226   53573 addons.go:231] Setting addon storage-provisioner=true in "calico-104104"
	I0914 23:10:23.394256   53573 config.go:182] Loaded profile config "calico-104104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:10:23.394275   53573 host.go:66] Checking if "calico-104104" exists ...
	I0914 23:10:23.394319   53573 addons.go:69] Setting default-storageclass=true in profile "calico-104104"
	I0914 23:10:23.394335   53573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-104104"
	I0914 23:10:23.394721   53573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:10:23.394724   53573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:10:23.394742   53573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:10:23.394755   53573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:10:23.415619   53573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46835
	I0914 23:10:23.416078   53573 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:10:23.416610   53573 main.go:141] libmachine: Using API Version  1
	I0914 23:10:23.416629   53573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:10:23.419057   53573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
	I0914 23:10:23.419720   53573 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:10:23.419936   53573 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:10:23.420269   53573 main.go:141] libmachine: Using API Version  1
	I0914 23:10:23.420289   53573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:10:23.420528   53573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:10:23.420557   53573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:10:23.421048   53573 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:10:23.421283   53573 main.go:141] libmachine: (calico-104104) Calling .GetState
	I0914 23:10:23.432481   53573 addons.go:231] Setting addon default-storageclass=true in "calico-104104"
	I0914 23:10:23.432529   53573 host.go:66] Checking if "calico-104104" exists ...
	I0914 23:10:23.432889   53573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:10:23.432913   53573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:10:23.443342   53573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0914 23:10:23.443759   53573 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:10:23.444328   53573 main.go:141] libmachine: Using API Version  1
	I0914 23:10:23.444346   53573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:10:23.444761   53573 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:10:23.445098   53573 main.go:141] libmachine: (calico-104104) Calling .GetState
	I0914 23:10:23.447042   53573 main.go:141] libmachine: (calico-104104) Calling .DriverName
	I0914 23:10:23.449014   53573 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:10:23.450668   53573 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:10:23.450688   53573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 23:10:23.450708   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHHostname
	I0914 23:10:23.455218   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:10:23.456096   53573 main.go:141] libmachine: (calico-104104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:77:d9", ip: ""} in network mk-calico-104104: {Iface:virbr2 ExpiryTime:2023-09-15 00:09:41 +0000 UTC Type:0 Mac:52:54:00:60:77:d9 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:calico-104104 Clientid:01:52:54:00:60:77:d9}
	I0914 23:10:23.456124   53573 main.go:141] libmachine: (calico-104104) DBG | domain calico-104104 has defined IP address 192.168.39.36 and MAC address 52:54:00:60:77:d9 in network mk-calico-104104
	I0914 23:10:23.458150   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHPort
	I0914 23:10:23.458437   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHKeyPath
	I0914 23:10:23.458636   53573 main.go:141] libmachine: (calico-104104) Calling .GetSSHUsername
	I0914 23:10:23.458885   53573 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/calico-104104/id_rsa Username:docker}
	I0914 23:10:23.466140   53573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0914 23:10:23.466758   53573 main.go:141] libmachine: () Calling .GetVersion
	I0914 23:10:23.467320   53573 main.go:141] libmachine: Using API Version  1
	I0914 23:10:23.467340   53573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 23:10:23.467779   53573 main.go:141] libmachine: () Calling .GetMachineName
	I0914 23:10:23.468189   53573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 23:10:23.468214   53573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 23:10:23.476651   53573 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-104104" context rescaled to 1 replicas
	I0914 23:10:23.476676   53573 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 23:10:23.478366   53573 out.go:177] * Verifying Kubernetes components...
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:46:33 UTC, ends at Thu 2023-09-14 23:10:25 UTC. --
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.658811347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1849b96b-3811-4658-8639-53f8a91db3bf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.659059887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1849b96b-3811-4658-8639-53f8a91db3bf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.692344436Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ff708fa5-2c44-43e7-8ebb-1b3d928ce0f9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.692661634Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9ea709bb4444541ec0e3dab990898a90b233a26eebdf05b73246815908b26f72,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-wb27t,Uid:41d83cd2-a4b5-4b49-99ac-2fa390769083,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731939633260235,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-wb27t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41d83cd2-a4b5-4b49-99ac-2fa390769083,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:19.307002626Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1c40fd3f-cdee-4408-87f1-c732015460c4,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731939519453400,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T22:52:19.185612969Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-ws5b8,Uid:8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731939365238991,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:17.522430953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&PodSandboxMetadata{Name:kube-proxy-9gwgv,Uid:d702b24f-9d6e-4650-8892-0b
e54cb46991,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731937546612512,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:52:17.204568218Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-588699,Uid:a59901a40eaa5f9a78f2d9bc5208557c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731915587338311,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: a59901a40eaa5f9a78f2d9bc5208557c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a59901a40eaa5f9a78f2d9bc5208557c,kubernetes.io/config.seen: 2023-09-14T22:51:55.078627648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-588699,Uid:38fc36a6071a7a2c7d0662f8c44c45c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731915581890055,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d0662f8c44c45c6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.205:2379,kubernetes.io/config.hash: 38fc36a6071a7a2c7d0662f8c44c45c6,kubernetes.io/config.seen: 2023-09-14T22:51:55.078619603Z,kubernetes.io/config.source: file,},Ru
ntimeHandler:,},&PodSandbox{Id:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-588699,Uid:e439c9af5f322909832e5f89900d71ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731915577668487,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e439c9af5f322909832e5f89900d71ab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e439c9af5f322909832e5f89900d71ab,kubernetes.io/config.seen: 2023-09-14T22:51:55.078629128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-588699,Uid:2555981d7842bbd1e687c979fbcfea59,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731915540231489,Labels
:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea59,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.205:8443,kubernetes.io/config.hash: 2555981d7842bbd1e687c979fbcfea59,kubernetes.io/config.seen: 2023-09-14T22:51:55.078625722Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=ff708fa5-2c44-43e7-8ebb-1b3d928ce0f9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.693689542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=60b42adb-71dd-4da1-a65d-9b420f7ff684 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.693780575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=60b42adb-71dd-4da1-a65d-9b420f7ff684 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.694139443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=60b42adb-71dd-4da1-a65d-9b420f7ff684 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.703923961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=023147cc-2ebe-4056-977c-6433fc8b071c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.704055234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=023147cc-2ebe-4056-977c-6433fc8b071c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.704345190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=023147cc-2ebe-4056-977c-6433fc8b071c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.749423498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6f2bccf5-93ef-425d-b2f6-63a79cfa7cdf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.749545823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6f2bccf5-93ef-425d-b2f6-63a79cfa7cdf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.749805064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6f2bccf5-93ef-425d-b2f6-63a79cfa7cdf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.795953455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd63304d-61af-4b51-a971-10663cb08b4f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.796055483Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd63304d-61af-4b51-a971-10663cb08b4f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.796373798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd63304d-61af-4b51-a971-10663cb08b4f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.850712180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6461670a-7e95-4465-bae6-ce1cbe8a7271 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.850832776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6461670a-7e95-4465-bae6-ce1cbe8a7271 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.851103701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6461670a-7e95-4465-bae6-ce1cbe8a7271 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.890719133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f2d63e4c-7d4d-4a1c-b7d7-0cb858c3f0b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.890812353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f2d63e4c-7d4d-4a1c-b7d7-0cb858c3f0b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.891074414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f2d63e4c-7d4d-4a1c-b7d7-0cb858c3f0b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.923270781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=62cadf53-73d5-49a0-bf42-6ed5a3074e12 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.923412711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=62cadf53-73d5-49a0-bf42-6ed5a3074e12 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:10:25 embed-certs-588699 crio[712]: time="2023-09-14 23:10:25.923612709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee,PodSandboxId:5e91bcbf6c9ebd9e4bb1412b683a3f896211d2f0717a84559656e17dd21c65d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731940776440733,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c40fd3f-cdee-4408-87f1-c732015460c4,},Annotations:map[string]string{io.kubernetes.container.hash: 3b1ba425,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1,PodSandboxId:1ebc35026b2aa5ffe23b35458ac6b38c422cb43c72dcc8a771fb57454784b429,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694731940250991847,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ws5b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b20fa8b-7e33-45e9-9e39-adbfbc0890a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e05add1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039,PodSandboxId:2ff8c35b50ce46e228e0892b7da59a8250e1e0ab6249c3f5ef380b40ddb8315d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694731938321699376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9gwgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d702b24f-9d6e-4650-8892-0be54cb46991,},Annotations:map[string]string{io.kubernetes.container.hash: dff81cf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6,PodSandboxId:3140b81f7dffd7ad67db77b04269e70469b15fb3c34b15ba40dcc12b1ec7afb6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694731916577360056,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e439c9af5f322909832e5f89900d71ab,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f,PodSandboxId:e8195cecec00f8b7eddca4a901a444c3cbeca28818b60d990b1463e4769e1899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694731916301823918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-588699,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: a59901a40eaa5f9a78f2d9bc5208557c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb,PodSandboxId:ee5258b32dd202c71a33647123577c55028a943f4ad3059b69cb2af893d5250a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694731916350945109,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38fc36a6071a7a2c7d066
2f8c44c45c6,},Annotations:map[string]string{io.kubernetes.container.hash: d601efb0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096,PodSandboxId:016a9a89d6a9e7eeda32eeb444d3d5f1dc3cf924f8bb4db9baa76c0e1db94819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694731916153859347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-588699,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2555981d7842bbd1e687c979fbcfea5
9,},Annotations:map[string]string{io.kubernetes.container.hash: f08f4542,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=62cadf53-73d5-49a0-bf42-6ed5a3074e12 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	cbdeed7dded6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   5e91bcbf6c9eb
	86f5cdd9f560f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago      Running             coredns                   0                   1ebc35026b2aa
	d2724572351c0       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   18 minutes ago      Running             kube-proxy                0                   2ff8c35b50ce4
	ab7d6b33e6b39       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   18 minutes ago      Running             kube-scheduler            2                   3140b81f7dffd
	6e4522f4466d1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago      Running             etcd                      2                   ee5258b32dd20
	28440a9764355       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   18 minutes ago      Running             kube-controller-manager   2                   e8195cecec00f
	e6f0ef2b040e6       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   18 minutes ago      Running             kube-apiserver            2                   016a9a89d6a9e
	
	* 
	* ==> coredns [86f5cdd9f560f159f918fe18c2e1af57738fdc05809ef2cafa667526d96285c1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44100 - 34745 "HINFO IN 5964101752069034912.334658549267858832. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008402338s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-588699
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-588699
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=embed-certs-588699
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_52_03_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:52:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-588699
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 23:10:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 23:07:41 +0000   Thu, 14 Sep 2023 22:51:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 23:07:41 +0000   Thu, 14 Sep 2023 22:51:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 23:07:41 +0000   Thu, 14 Sep 2023 22:51:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 23:07:41 +0000   Thu, 14 Sep 2023 22:52:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.205
	  Hostname:    embed-certs-588699
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 76fa946e45204ff4b777d25ef1a06f89
	  System UUID:                76fa946e-4520-4ff4-b777-d25ef1a06f89
	  Boot ID:                    25dee32c-d04d-4a7b-85ed-67595cf612f9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ws5b8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-embed-certs-588699                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-embed-certs-588699             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-embed-certs-588699    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-9gwgv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-embed-certs-588699             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-57f55c9bc5-wb27t               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-588699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-588699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-588699 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node embed-certs-588699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node embed-certs-588699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node embed-certs-588699 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             18m                kubelet          Node embed-certs-588699 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m                kubelet          Node embed-certs-588699 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-588699 event: Registered Node embed-certs-588699 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep14 22:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.397768] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.731309] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138883] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.350390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.493875] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.132967] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.170458] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.121809] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.228973] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +17.467234] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[Sep14 22:47] kauditd_printk_skb: 29 callbacks suppressed
	[Sep14 22:51] systemd-fstab-generator[3472]: Ignoring "noauto" for root device
	[Sep14 22:52] systemd-fstab-generator[3794]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [6e4522f4466d1e695db75df9d6f0bcd6bd3dda37ad982eb6aba8b0a0b268b4bb] <==
	* {"level":"info","ts":"2023-09-14T23:08:00.054435Z","caller":"traceutil/trace.go:171","msg":"trace[1047781701] linearizableReadLoop","detail":"{readStateIndex:1460; appliedIndex:1459; }","duration":"185.907778ms","start":"2023-09-14T23:07:59.868512Z","end":"2023-09-14T23:08:00.05442Z","steps":["trace[1047781701] 'read index received'  (duration: 185.807953ms)","trace[1047781701] 'applied index is now lower than readState.Index'  (duration: 99.13µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T23:08:00.05462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.118396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-14T23:08:00.05468Z","caller":"traceutil/trace.go:171","msg":"trace[2096997765] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1253; }","duration":"186.218317ms","start":"2023-09-14T23:07:59.868455Z","end":"2023-09-14T23:08:00.054674Z","steps":["trace[2096997765] 'agreement among raft nodes before linearized reading'  (duration: 186.098458ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:08:00.054884Z","caller":"traceutil/trace.go:171","msg":"trace[534823178] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"313.974345ms","start":"2023-09-14T23:07:59.740893Z","end":"2023-09-14T23:08:00.054868Z","steps":["trace[534823178] 'process raft request'  (duration: 313.348585ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T23:08:00.05671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T23:07:59.740873Z","time spent":"314.067003ms","remote":"127.0.0.1:47360","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ihglol47zqushy3t6prpe5wt44\" mod_revision:1245 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ihglol47zqushy3t6prpe5wt44\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ihglol47zqushy3t6prpe5wt44\" > >"}
	{"level":"info","ts":"2023-09-14T23:08:00.6427Z","caller":"traceutil/trace.go:171","msg":"trace[417678460] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"117.437084ms","start":"2023-09-14T23:08:00.525236Z","end":"2023-09-14T23:08:00.642673Z","steps":["trace[417678460] 'process raft request'  (duration: 117.283783ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:08:34.337357Z","caller":"traceutil/trace.go:171","msg":"trace[1941148647] linearizableReadLoop","detail":"{readStateIndex:1496; appliedIndex:1495; }","duration":"210.833399ms","start":"2023-09-14T23:08:34.1265Z","end":"2023-09-14T23:08:34.337334Z","steps":["trace[1941148647] 'read index received'  (duration: 210.5877ms)","trace[1941148647] 'applied index is now lower than readState.Index'  (duration: 245.239µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T23:08:34.337711Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.157431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-14T23:08:34.338064Z","caller":"traceutil/trace.go:171","msg":"trace[1797253780] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1282; }","duration":"211.614911ms","start":"2023-09-14T23:08:34.126443Z","end":"2023-09-14T23:08:34.338058Z","steps":["trace[1797253780] 'agreement among raft nodes before linearized reading'  (duration: 211.096593ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:08:34.338019Z","caller":"traceutil/trace.go:171","msg":"trace[338839679] transaction","detail":"{read_only:false; response_revision:1282; number_of_response:1; }","duration":"635.01516ms","start":"2023-09-14T23:08:33.702991Z","end":"2023-09-14T23:08:34.338007Z","steps":["trace[338839679] 'process raft request'  (duration: 634.149346ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T23:08:34.338502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T23:08:33.702971Z","time spent":"635.469558ms","remote":"127.0.0.1:47342","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4056,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-wb27t\" mod_revision:1039 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-wb27t\" value_size:3990 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-wb27t\" > >"}
	{"level":"warn","ts":"2023-09-14T23:09:32.429264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.703761ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10170406234903730943 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.205\" mod_revision:1321 > success:<request_put:<key:\"/registry/masterleases/192.168.61.205\" value_size:67 lease:947034198048955133 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.205\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-14T23:09:32.429514Z","caller":"traceutil/trace.go:171","msg":"trace[715319433] transaction","detail":"{read_only:false; response_revision:1329; number_of_response:1; }","duration":"257.841453ms","start":"2023-09-14T23:09:32.171646Z","end":"2023-09-14T23:09:32.429488Z","steps":["trace[715319433] 'process raft request'  (duration: 128.401471ms)","trace[715319433] 'compare'  (duration: 128.577334ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-14T23:09:34.038037Z","caller":"traceutil/trace.go:171","msg":"trace[447231946] transaction","detail":"{read_only:false; response_revision:1331; number_of_response:1; }","duration":"195.373931ms","start":"2023-09-14T23:09:33.842633Z","end":"2023-09-14T23:09:34.038007Z","steps":["trace[447231946] 'process raft request'  (duration: 195.248704ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:09:57.715728Z","caller":"traceutil/trace.go:171","msg":"trace[1258620520] linearizableReadLoop","detail":"{readStateIndex:1579; appliedIndex:1578; }","duration":"249.100422ms","start":"2023-09-14T23:09:57.46661Z","end":"2023-09-14T23:09:57.71571Z","steps":["trace[1258620520] 'read index received'  (duration: 248.945407ms)","trace[1258620520] 'applied index is now lower than readState.Index'  (duration: 154.207µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T23:09:57.715933Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.270212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-14T23:09:57.716002Z","caller":"traceutil/trace.go:171","msg":"trace[533510471] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1348; }","duration":"249.401939ms","start":"2023-09-14T23:09:57.466588Z","end":"2023-09-14T23:09:57.71599Z","steps":["trace[533510471] 'agreement among raft nodes before linearized reading'  (duration: 249.242432ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T23:09:57.715956Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.59131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2023-09-14T23:09:57.716099Z","caller":"traceutil/trace.go:171","msg":"trace[1109794188] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1348; }","duration":"191.738062ms","start":"2023-09-14T23:09:57.524348Z","end":"2023-09-14T23:09:57.716086Z","steps":["trace[1109794188] 'agreement among raft nodes before linearized reading'  (duration: 191.565628ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:09:58.046977Z","caller":"traceutil/trace.go:171","msg":"trace[607262345] transaction","detail":"{read_only:false; response_revision:1349; number_of_response:1; }","duration":"326.440859ms","start":"2023-09-14T23:09:57.720515Z","end":"2023-09-14T23:09:58.046956Z","steps":["trace[607262345] 'process raft request'  (duration: 280.210554ms)","trace[607262345] 'compare'  (duration: 46.048875ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T23:09:58.04808Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-14T23:09:57.720502Z","time spent":"327.438278ms","remote":"127.0.0.1:47338","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1348 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-09-14T23:10:24.50378Z","caller":"traceutil/trace.go:171","msg":"trace[418708271] linearizableReadLoop","detail":"{readStateIndex:1606; appliedIndex:1605; }","duration":"210.92781ms","start":"2023-09-14T23:10:24.292833Z","end":"2023-09-14T23:10:24.503761Z","steps":["trace[418708271] 'read index received'  (duration: 210.679159ms)","trace[418708271] 'applied index is now lower than readState.Index'  (duration: 247.823µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T23:10:24.504063Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.21413ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-14T23:10:24.504117Z","caller":"traceutil/trace.go:171","msg":"trace[54473455] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:1370; }","duration":"211.295127ms","start":"2023-09-14T23:10:24.292802Z","end":"2023-09-14T23:10:24.504097Z","steps":["trace[54473455] 'agreement among raft nodes before linearized reading'  (duration: 211.087907ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T23:10:24.504511Z","caller":"traceutil/trace.go:171","msg":"trace[1817719522] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"296.612174ms","start":"2023-09-14T23:10:24.207881Z","end":"2023-09-14T23:10:24.504493Z","steps":["trace[1817719522] 'process raft request'  (duration: 295.691654ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:10:26 up 24 min,  0 users,  load average: 0.16, 0.13, 0.10
	Linux embed-certs-588699 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e6f0ef2b040e64be69d51d960bf722fc50263e156156166e7c7173fe4644c096] <==
	* I0914 23:08:00.194481       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.66.124:443: connect: connection refused
	I0914 23:08:00.194557       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:08:01.334646       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:08:01.334723       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 23:08:01.334741       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:08:01.335995       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:08:01.336120       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:08:01.336134       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:08:34.339498       1 trace.go:236] Trace[917660932]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1251c982-00e4-44a8-aae3-9b18db36665e,client:192.168.61.205,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/metrics-server-57f55c9bc5-wb27t/status,user-agent:kubelet/v1.28.1 (linux/amd64) kubernetes/8dc49c4,verb:PATCH (14-Sep-2023 23:08:33.698) (total time: 641ms):
	Trace[917660932]: ["GuaranteedUpdate etcd3" audit-id:1251c982-00e4-44a8-aae3-9b18db36665e,key:/pods/kube-system/metrics-server-57f55c9bc5-wb27t,type:*core.Pod,resource:pods 640ms (23:08:33.698)
	Trace[917660932]:  ---"Txn call completed" 636ms (23:08:34.338)]
	Trace[917660932]: ---"Object stored in database" 637ms (23:08:34.339)
	Trace[917660932]: [641.045927ms] [641.045927ms] END
	I0914 23:09:00.193248       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.66.124:443: connect: connection refused
	I0914 23:09:00.193322       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 23:10:00.194362       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.66.124:443: connect: connection refused
	I0914 23:10:00.194439       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:10:01.335903       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:10:01.336096       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 23:10:01.336137       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:10:01.336484       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:10:01.336650       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:10:01.338206       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [28440a9764355bf67acc74d22ddca776be602edd0d69633b23b7514d3c1a0e5f] <==
	* I0914 23:04:47.042387       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:05:16.522586       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:05:17.051852       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:05:46.531556       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:05:47.060504       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:06:16.538669       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:06:17.069522       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:06:46.544547       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:06:47.079271       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:07:16.550533       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:07:17.088736       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:07:46.556803       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:07:47.097685       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:08:16.564495       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:08:17.111059       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 23:08:34.342648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="500.538µs"
	I0914 23:08:45.716907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="156.706µs"
	E0914 23:08:46.571946       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:08:47.121796       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:09:16.580485       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:09:17.131121       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:09:46.587151       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:09:47.139528       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:10:16.594258       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:10:17.149952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [d2724572351c0c9508b37f61c35fbac205008cf045e0c516955b2046a597a039] <==
	* I0914 22:52:19.163206       1 server_others.go:69] "Using iptables proxy"
	I0914 22:52:19.238652       1 node.go:141] Successfully retrieved node IP: 192.168.61.205
	I0914 22:52:19.431852       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:52:19.432112       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:52:19.445950       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:52:19.446354       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:52:19.446534       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:52:19.446725       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:52:19.448108       1 config.go:188] "Starting service config controller"
	I0914 22:52:19.448149       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:52:19.448310       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:52:19.448458       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:52:19.449118       1 config.go:315] "Starting node config controller"
	I0914 22:52:19.449260       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:52:19.552046       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:52:19.554875       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:52:19.568287       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ab7d6b33e6b395326d1d6a962ca615ed81ea922d4e5403030bb9835b275c2fb6] <==
	* W0914 22:52:00.344812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 22:52:00.344846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 22:52:00.345127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:00.345726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:00.346224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:00.346411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:00.346535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:52:00.346571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0914 22:52:00.346636       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:00.346675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:00.346921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:52:00.347102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 22:52:00.347004       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:52:00.347454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 22:52:00.347047       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:52:00.347692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 22:52:01.402497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:01.402628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:01.439823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:01.439949       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 22:52:01.586523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:52:01.586576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 22:52:01.825568       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:52:01.825631       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0914 22:52:03.633561       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:46:33 UTC, ends at Thu 2023-09-14 23:10:26 UTC. --
	Sep 14 23:08:03 embed-certs-588699 kubelet[3801]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:08:03 embed-certs-588699 kubelet[3801]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:08:03 embed-certs-588699 kubelet[3801]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:08:04 embed-certs-588699 kubelet[3801]: E0914 23:08:04.693393    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:08:18 embed-certs-588699 kubelet[3801]: E0914 23:08:18.704497    3801 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 14 23:08:18 embed-certs-588699 kubelet[3801]: E0914 23:08:18.704547    3801 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 14 23:08:18 embed-certs-588699 kubelet[3801]: E0914 23:08:18.704786    3801 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bc98n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-wb27t_kube-system(41d83cd2-a4b5-4b49-99ac-2fa390769083): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 23:08:18 embed-certs-588699 kubelet[3801]: E0914 23:08:18.704825    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:08:33 embed-certs-588699 kubelet[3801]: E0914 23:08:33.693362    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:08:45 embed-certs-588699 kubelet[3801]: E0914 23:08:45.693765    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:08:57 embed-certs-588699 kubelet[3801]: E0914 23:08:57.693808    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:09:03 embed-certs-588699 kubelet[3801]: E0914 23:09:03.729332    3801 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:09:03 embed-certs-588699 kubelet[3801]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:09:03 embed-certs-588699 kubelet[3801]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:09:03 embed-certs-588699 kubelet[3801]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:09:09 embed-certs-588699 kubelet[3801]: E0914 23:09:09.693127    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:09:21 embed-certs-588699 kubelet[3801]: E0914 23:09:21.692475    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:09:34 embed-certs-588699 kubelet[3801]: E0914 23:09:34.692671    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:09:49 embed-certs-588699 kubelet[3801]: E0914 23:09:49.693467    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:10:02 embed-certs-588699 kubelet[3801]: E0914 23:10:02.692441    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	Sep 14 23:10:03 embed-certs-588699 kubelet[3801]: E0914 23:10:03.729304    3801 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:10:03 embed-certs-588699 kubelet[3801]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:10:03 embed-certs-588699 kubelet[3801]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:10:03 embed-certs-588699 kubelet[3801]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:10:17 embed-certs-588699 kubelet[3801]: E0914 23:10:17.692638    3801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wb27t" podUID="41d83cd2-a4b5-4b49-99ac-2fa390769083"
	
	* 
	* ==> storage-provisioner [cbdeed7dded6ffbae2d1c577a557632c524de611a812c77034d6ec6db604caee] <==
	* I0914 22:52:20.950437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:52:20.962893       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:52:20.963195       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:52:20.972727       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:52:20.973584       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-588699_308c6d6c-7d33-4cae-b328-30579a567551!
	I0914 22:52:20.972964       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2391c686-c332-4acf-99d9-c85e2955dd08", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-588699_308c6d6c-7d33-4cae-b328-30579a567551 became leader
	I0914 22:52:21.074612       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-588699_308c6d6c-7d33-4cae-b328-30579a567551!
	

                                                
                                                
-- /stdout --
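Two recurring errors in the kubelet log above are expected for this test setup rather than separate regressions: the metrics-server pulls fail because the addon was enabled with --registries=MetricsServer=fake.domain (the same substitution is recorded in the Audit table further down in this report), and the KUBE-KUBELET-CANARY messages only mean the guest kernel has no ip6table_nat module, so the IPv6 nat table cannot be created. A minimal way to confirm both from the host, assuming the embed-certs-588699 profile is still up and that lsmod is available in the guest image:

	# check whether the ip6tables nat module is loaded inside the guest (its absence is benign here)
	minikube -p embed-certs-588699 ssh -- "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"
	# confirm the metrics-server image reference really points at the fake registry
	kubectl --context embed-certs-588699 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'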
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-588699 -n embed-certs-588699
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-588699 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wb27t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-588699 describe pod metrics-server-57f55c9bc5-wb27t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-588699 describe pod metrics-server-57f55c9bc5-wb27t: exit status 1 (77.84372ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wb27t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-588699 describe pod metrics-server-57f55c9bc5-wb27t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.78s)
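The describe step above failed only because the pod name captured a moment earlier (metrics-server-57f55c9bc5-wb27t) was gone again by the time kubectl describe ran. When reproducing this post-mortem by hand, selecting by label instead of by pod name avoids that race; a sketch only, assuming the addon uses its standard k8s-app=metrics-server label:

	# describe whatever metrics-server pod currently exists instead of a fixed name
	kubectl --context embed-certs-588699 -n kube-system describe pod -l k8s-app=metrics-server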

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (329.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 23:01:36.474940   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 23:02:59.522942   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-344363 -n no-preload-344363
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-14 23:06:57.278198638 +0000 UTC m=+5439.472540284
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-344363 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-344363 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.69µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-344363 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
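The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to carry the image substituted via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (recorded in the Audit table below). Since the describe call above hit the context deadline before producing any output, a manual spot-check is to read the image fields directly; a sketch, assuming the no-preload-344363 API server is still reachable:

	# list the images used by deployments in the kubernetes-dashboard namespace
	kubectl --context no-preload-344363 -n kubernetes-dashboard get deploy \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'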
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-344363 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-344363 logs -n 25: (1.869416606s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-711912                           | kubernetes-upgrade-711912    | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:36 UTC |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-344363             | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:40 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799144  | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC |                     |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-344363                  | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-588699            | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799144       | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-930717        | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:51 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-588699                 | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-930717             | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC | 14 Sep 23 22:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:45:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:45:20.513575   46713 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:45:20.513835   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.513847   46713 out.go:309] Setting ErrFile to fd 2...
	I0914 22:45:20.513852   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.514030   46713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:45:20.514571   46713 out.go:303] Setting JSON to false
	I0914 22:45:20.515550   46713 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5263,"bootTime":1694726258,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:45:20.515607   46713 start.go:138] virtualization: kvm guest
	I0914 22:45:20.517738   46713 out.go:177] * [old-k8s-version-930717] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:45:20.519301   46713 notify.go:220] Checking for updates...
	I0914 22:45:20.519309   46713 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:45:20.520886   46713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:45:20.522525   46713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:45:20.524172   46713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:45:20.525826   46713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:45:20.527204   46713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:45:20.529068   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:45:20.529489   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.529542   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.548088   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0914 22:45:20.548488   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.548969   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.548985   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.549404   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.549555   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.551507   46713 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 22:45:20.552878   46713 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:45:20.553145   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.553176   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.566825   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0914 22:45:20.567181   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.567617   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.567646   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.568018   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.568195   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.601886   46713 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:45:20.603176   46713 start.go:298] selected driver: kvm2
	I0914 22:45:20.603188   46713 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.603284   46713 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:45:20.603926   46713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.603997   46713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:45:20.617678   46713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:45:20.618009   46713 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:45:20.618045   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:45:20.618062   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:45:20.618075   46713 start_flags.go:321] config:
	{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.618204   46713 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.619892   46713 out.go:177] * Starting control plane node old-k8s-version-930717 in cluster old-k8s-version-930717
	I0914 22:45:22.939748   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:20.621146   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:45:20.621171   46713 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0914 22:45:20.621184   46713 cache.go:57] Caching tarball of preloaded images
	I0914 22:45:20.621265   46713 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:45:20.621286   46713 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0914 22:45:20.621381   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:45:20.621551   46713 start.go:365] acquiring machines lock for old-k8s-version-930717: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:45:29.019730   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:32.091705   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:38.171724   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:41.243661   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:47.323733   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:50.395751   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:56.475703   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:59.547782   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:46:02.551591   45954 start.go:369] acquired machines lock for "default-k8s-diff-port-799144" in 3m15.018428257s
	I0914 22:46:02.551631   45954 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:02.551642   45954 fix.go:54] fixHost starting: 
	I0914 22:46:02.551944   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:02.551972   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:02.566520   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0914 22:46:02.566922   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:02.567373   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:02.567392   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:02.567734   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:02.567961   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:02.568128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:02.569692   45954 fix.go:102] recreateIfNeeded on default-k8s-diff-port-799144: state=Stopped err=<nil>
	I0914 22:46:02.569714   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	W0914 22:46:02.569887   45954 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:02.571684   45954 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799144" ...
	I0914 22:46:02.549458   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:02.549490   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:46:02.551419   45407 machine.go:91] provisioned docker machine in 4m37.435317847s
	I0914 22:46:02.551457   45407 fix.go:56] fixHost completed within 4m37.455553972s
	I0914 22:46:02.551462   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 4m37.455581515s
	W0914 22:46:02.551502   45407 start.go:688] error starting host: provision: host is not running
	W0914 22:46:02.551586   45407 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0914 22:46:02.551600   45407 start.go:703] Will try again in 5 seconds ...
	I0914 22:46:02.573354   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Start
	I0914 22:46:02.573535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring networks are active...
	I0914 22:46:02.574326   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network default is active
	I0914 22:46:02.574644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network mk-default-k8s-diff-port-799144 is active
	I0914 22:46:02.575046   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Getting domain xml...
	I0914 22:46:02.575767   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Creating domain...
	I0914 22:46:03.792613   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting to get IP...
	I0914 22:46:03.793573   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.793932   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.794029   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:03.793928   46868 retry.go:31] will retry after 250.767464ms: waiting for machine to come up
	I0914 22:46:04.046447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046928   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.046853   46868 retry.go:31] will retry after 320.29371ms: waiting for machine to come up
	I0914 22:46:04.368383   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368782   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368814   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.368726   46868 retry.go:31] will retry after 295.479496ms: waiting for machine to come up
	I0914 22:46:04.666192   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666655   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666680   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.666595   46868 retry.go:31] will retry after 572.033699ms: waiting for machine to come up
	I0914 22:46:05.240496   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240920   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240953   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.240872   46868 retry.go:31] will retry after 493.557238ms: waiting for machine to come up
	I0914 22:46:05.735682   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736201   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.736150   46868 retry.go:31] will retry after 848.645524ms: waiting for machine to come up
	I0914 22:46:06.586116   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586568   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:06.586473   46868 retry.go:31] will retry after 866.110647ms: waiting for machine to come up
	I0914 22:46:07.553803   45407 start.go:365] acquiring machines lock for no-preload-344363: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:46:07.454431   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454798   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454827   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:07.454743   46868 retry.go:31] will retry after 1.485337575s: waiting for machine to come up
	I0914 22:46:08.941761   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942136   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942177   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:08.942104   46868 retry.go:31] will retry after 1.640651684s: waiting for machine to come up
	I0914 22:46:10.584576   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584939   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:10.584838   46868 retry.go:31] will retry after 1.656716681s: waiting for machine to come up
	I0914 22:46:12.243599   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244096   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:12.244037   46868 retry.go:31] will retry after 2.692733224s: waiting for machine to come up
	I0914 22:46:14.939726   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940035   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940064   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:14.939986   46868 retry.go:31] will retry after 2.745837942s: waiting for machine to come up
	I0914 22:46:22.180177   46412 start.go:369] acquired machines lock for "embed-certs-588699" in 2m3.238409394s
	I0914 22:46:22.180244   46412 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:22.180256   46412 fix.go:54] fixHost starting: 
	I0914 22:46:22.180661   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:22.180706   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:22.196558   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0914 22:46:22.196900   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:22.197304   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:46:22.197326   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:22.197618   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:22.197808   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:22.197986   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:46:22.199388   46412 fix.go:102] recreateIfNeeded on embed-certs-588699: state=Stopped err=<nil>
	I0914 22:46:22.199423   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	W0914 22:46:22.199595   46412 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:22.202757   46412 out.go:177] * Restarting existing kvm2 VM for "embed-certs-588699" ...
	I0914 22:46:17.687397   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687911   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687937   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:17.687878   46868 retry.go:31] will retry after 3.174192278s: waiting for machine to come up
	I0914 22:46:20.866173   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866687   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Found IP for machine: 192.168.50.175
	I0914 22:46:20.866722   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has current primary IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866737   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserving static IP address...
	I0914 22:46:20.867209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.867245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | skip adding static IP to network mk-default-k8s-diff-port-799144 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"}
	I0914 22:46:20.867263   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserved static IP address: 192.168.50.175
	I0914 22:46:20.867290   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for SSH to be available...
	I0914 22:46:20.867303   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Getting to WaitForSSH function...
	I0914 22:46:20.869597   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.869960   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.869993   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.870103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH client type: external
	I0914 22:46:20.870137   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa (-rw-------)
	I0914 22:46:20.870193   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:20.870218   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | About to run SSH command:
	I0914 22:46:20.870237   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | exit 0
	I0914 22:46:20.959125   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:20.959456   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetConfigRaw
	I0914 22:46:20.960082   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:20.962512   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.962889   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.962915   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.963114   45954 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/config.json ...
	I0914 22:46:20.963282   45954 machine.go:88] provisioning docker machine ...
	I0914 22:46:20.963300   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:20.963509   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963682   45954 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799144"
	I0914 22:46:20.963709   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963899   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:20.966359   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966728   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.966757   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966956   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:20.967146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967287   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967420   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:20.967584   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:20.967963   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:20.967983   45954 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799144 && echo "default-k8s-diff-port-799144" | sudo tee /etc/hostname
	I0914 22:46:21.098114   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799144
	
	I0914 22:46:21.098158   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.100804   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101167   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.101208   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.101532   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101855   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.102028   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.102386   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.102406   45954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799144/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:21.225929   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
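The shell snippet the provisioner just ran guarantees the machine's new hostname resolves locally: leave /etc/hosts alone if an entry already ends with the hostname, rewrite an existing 127.0.1.1 line if there is one, otherwise append a fresh one. A minimal standalone Go sketch of that same decision logic (a hypothetical helper for illustration, not minikube's own code):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry mirrors the provisioning shell logic: if any hosts line
	// already ends with the hostname, the content is returned unchanged;
	// otherwise an existing "127.0.1.1 ..." line is rewritten, or a new entry
	// is appended at the end.
	func ensureHostsEntry(hosts, hostname string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			fields := strings.Fields(l)
			if len(fields) > 1 && fields[len(fields)-1] == hostname {
				return hosts // already present, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the loopback alias
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + hostname // append a new entry
	}

	func main() {
		before := "127.0.0.1 localhost\n127.0.1.1 old-name"
		fmt.Println(ensureHostsEntry(before, "default-k8s-diff-port-799144"))
	}

Running the sketch rewrites the stale 127.0.1.1 alias to the new machine name, which is exactly the case the sed branch above handles.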
	I0914 22:46:21.225964   45954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:21.225992   45954 buildroot.go:174] setting up certificates
	I0914 22:46:21.226007   45954 provision.go:83] configureAuth start
	I0914 22:46:21.226023   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:21.226299   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:21.229126   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229514   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.229555   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.231683   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.231992   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.232027   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.232179   45954 provision.go:138] copyHostCerts
	I0914 22:46:21.232233   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:21.232247   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:21.232321   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:21.232412   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:21.232421   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:21.232446   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:21.232542   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:21.232551   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:21.232572   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:21.232617   45954 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799144 san=[192.168.50.175 192.168.50.175 localhost 127.0.0.1 minikube default-k8s-diff-port-799144]
	I0914 22:46:21.489180   45954 provision.go:172] copyRemoteCerts
	I0914 22:46:21.489234   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:21.489257   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.491989   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492308   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.492334   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.492734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.492869   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.493038   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:21.579991   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0914 22:46:21.599819   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:21.619391   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:21.638607   45954 provision.go:86] duration metric: configureAuth took 412.585328ms
	I0914 22:46:21.638629   45954 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:21.638797   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:21.638867   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.641693   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642033   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.642067   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.642399   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642562   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.642900   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.643239   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.643257   45954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:21.928913   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:21.928940   45954 machine.go:91] provisioned docker machine in 965.645328ms
	I0914 22:46:21.928952   45954 start.go:300] post-start starting for "default-k8s-diff-port-799144" (driver="kvm2")
	I0914 22:46:21.928964   45954 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:21.928987   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:21.929377   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:21.929425   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.931979   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932350   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.932388   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932475   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.932704   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.932923   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.933059   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.020329   45954 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:22.024444   45954 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:22.024458   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:22.024513   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:22.024589   45954 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:22.024672   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:22.033456   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:22.054409   45954 start.go:303] post-start completed in 125.445528ms
	I0914 22:46:22.054427   45954 fix.go:56] fixHost completed within 19.502785226s
	I0914 22:46:22.054444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.057353   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057690   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.057721   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057925   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.058139   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058304   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058483   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.058657   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:22.059051   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:22.059065   45954 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:22.180023   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731582.133636857
	
	I0914 22:46:22.180044   45954 fix.go:206] guest clock: 1694731582.133636857
	I0914 22:46:22.180054   45954 fix.go:219] Guest: 2023-09-14 22:46:22.133636857 +0000 UTC Remote: 2023-09-14 22:46:22.054430307 +0000 UTC m=+214.661061156 (delta=79.20655ms)
	I0914 22:46:22.180078   45954 fix.go:190] guest clock delta is within tolerance: 79.20655ms
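The fix step above reads the guest's clock with `date +%s.%N`, compares it to the host's wall clock, and only proceeds when the skew is inside a tolerance (here the delta is about 79ms). A small self-contained Go sketch of that comparison; the one-second tolerance in the example is an assumption for illustration, not the value minikube uses:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output into a time.Time. float64
	// parsing is approximate below the microsecond, which is fine for a skew
	// check measured in tens of milliseconds.
	func parseGuestClock(out string) (time.Time, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
		if err != nil {
			return time.Time{}, err
		}
		whole := int64(secs)
		return time.Unix(whole, int64((secs-float64(whole))*1e9)), nil
	}

	// withinTolerance reports whether guest and host clocks differ by less than tol.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta < tol
	}

	func main() {
		guest, _ := parseGuestClock("1694731582.133636857")
		host := time.Unix(1694731582, 54430307) // 22:46:22.054430307 UTC from the log
		fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
	}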
	I0914 22:46:22.180084   45954 start.go:83] releasing machines lock for "default-k8s-diff-port-799144", held for 19.628473828s
	I0914 22:46:22.180114   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.180408   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:22.183182   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183507   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.183543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183675   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184175   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184384   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184494   45954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:22.184535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.184627   45954 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:22.184662   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.187447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187604   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187813   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.187839   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187971   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.187986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.188024   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.188151   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.188153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188344   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188391   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188500   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.188519   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188618   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.303009   45954 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:22.308185   45954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:22.450504   45954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:22.455642   45954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:22.455700   45954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:22.468430   45954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:22.468453   45954 start.go:469] detecting cgroup driver to use...
	I0914 22:46:22.468509   45954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:22.483524   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:22.494650   45954 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:22.494706   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:22.506589   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:22.518370   45954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:22.619545   45954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:22.737486   45954 docker.go:212] disabling docker service ...
	I0914 22:46:22.737551   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:22.749267   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:22.759866   45954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:22.868561   45954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:22.973780   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:22.986336   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:23.004987   45954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:23.005042   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.013821   45954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:23.013889   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.022487   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.030875   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
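The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager to cgroupfs, drop any existing conmon_cgroup line, and re-insert conmon_cgroup = "pod" right after the cgroup_manager line. A compact Go sketch that applies the same three edits to the config text, as a simplification of the sed calls and assuming each key appears at most once:

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf mirrors the sed edits from the log: set pause_image,
	// remove the old conmon_cgroup line, and replace cgroup_manager with the
	// cgroupfs setting followed by conmon_cgroup = "pod".
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(rewriteCrioConf(in))
	}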
	I0914 22:46:23.038964   45954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:23.047246   45954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:23.054339   45954 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:23.054379   45954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:23.066649   45954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
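The three commands above are the usual CNI prerequisite dance: probe the bridge netfilter sysctl, load br_netfilter when the probe fails (as it does here), then enable IPv4 forwarding. A rough standalone Go equivalent, assuming root privileges on the guest; this is illustrative only, not the ssh_runner code path:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter loads br_netfilter if the bridge sysctl is missing
	// and then enables IPv4 forwarding, mirroring the logged commands.
	func ensureBridgeNetfilter() error {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); os.IsNotExist(err) {
			// The sysctl file is absent, so the module is not loaded yet.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}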
	I0914 22:46:23.077024   45954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:23.174635   45954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:23.337031   45954 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:23.337113   45954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:23.342241   45954 start.go:537] Will wait 60s for crictl version
	I0914 22:46:23.342308   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:46:23.345832   45954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:23.377347   45954 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:23.377433   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.425559   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.492770   45954 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:22.203936   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Start
	I0914 22:46:22.204098   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring networks are active...
	I0914 22:46:22.204740   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network default is active
	I0914 22:46:22.205158   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network mk-embed-certs-588699 is active
	I0914 22:46:22.205524   46412 main.go:141] libmachine: (embed-certs-588699) Getting domain xml...
	I0914 22:46:22.206216   46412 main.go:141] libmachine: (embed-certs-588699) Creating domain...
	I0914 22:46:23.529479   46412 main.go:141] libmachine: (embed-certs-588699) Waiting to get IP...
	I0914 22:46:23.530274   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.530639   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.530694   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.530608   46986 retry.go:31] will retry after 299.617651ms: waiting for machine to come up
	I0914 22:46:23.494065   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:23.496974   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497458   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:23.497490   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497694   45954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:23.501920   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:23.517500   45954 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:23.517542   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:23.554344   45954 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:23.554403   45954 ssh_runner.go:195] Run: which lz4
	I0914 22:46:23.558745   45954 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:23.563443   45954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:23.563488   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:25.365372   45954 crio.go:444] Took 1.806660 seconds to copy over tarball
	I0914 22:46:25.365442   45954 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:23.832332   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.833457   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.833488   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.832911   46986 retry.go:31] will retry after 315.838121ms: waiting for machine to come up
	I0914 22:46:24.150532   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.150980   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.151009   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.150942   46986 retry.go:31] will retry after 369.928332ms: waiting for machine to come up
	I0914 22:46:24.522720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.523232   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.523257   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.523145   46986 retry.go:31] will retry after 533.396933ms: waiting for machine to come up
	I0914 22:46:25.057818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.058371   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.058405   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.058318   46986 retry.go:31] will retry after 747.798377ms: waiting for machine to come up
	I0914 22:46:25.807422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.807912   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.807956   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.807874   46986 retry.go:31] will retry after 947.037376ms: waiting for machine to come up
	I0914 22:46:26.756214   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:26.756720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:26.756757   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:26.756689   46986 retry.go:31] will retry after 1.117164865s: waiting for machine to come up
	I0914 22:46:27.875432   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:27.875931   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:27.875953   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:27.875886   46986 retry.go:31] will retry after 1.117181084s: waiting for machine to come up
	I0914 22:46:28.197684   45954 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.832216899s)
	I0914 22:46:28.197710   45954 crio.go:451] Took 2.832313 seconds to extract the tarball
	I0914 22:46:28.197718   45954 ssh_runner.go:146] rm: /preloaded.tar.lz4
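The preload path above is: stat /preloaded.tar.lz4, copy the cached tarball over SSH when it is absent, extract it with `tar -I lz4 -C /var`, then delete the tarball. A short Go sketch of the extract-and-clean-up step, using the same paths as the log and shelling out to tar; error handling is trimmed for brevity and this is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into destDir and
	// removes it afterwards, mirroring `sudo tar -I lz4 -C /var -xf ...`.
	func extractPreload(tarball, destDir string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball missing: %w", err)
		}
		cmd := exec.Command("tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract: %v: %s", err, out)
		}
		return os.Remove(tarball)
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}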
	I0914 22:46:28.236545   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:28.286349   45954 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:28.286374   45954 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:28.286449   45954 ssh_runner.go:195] Run: crio config
	I0914 22:46:28.344205   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:28.344231   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:28.344253   45954 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:28.344289   45954 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.175 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799144 NodeName:default-k8s-diff-port-799144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:28.344454   45954 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.175
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799144"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:28.344536   45954 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-799144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
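The generated systemd drop-in above pins the kubelet to the CRI-O socket, the node IP, and the hostname override for this profile. A minimal text/template sketch that renders the same ExecStart line from those three values; this is a hypothetical rendering helper for illustration, not minikube's actual generator:

	package main

	import (
		"os"
		"text/template"
	)

	// execStartTmpl reproduces the ExecStart line from the drop-in above.
	const execStartTmpl = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet ` +
		`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
		`--config=/var/lib/kubelet/config.yaml ` +
		`--container-runtime-endpoint=unix:///var/run/crio/crio.sock ` +
		`--hostname-override={{.NodeName}} ` +
		`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
		`--node-ip={{.NodeIP}}
	`

	func main() {
		params := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.28.1", "default-k8s-diff-port-799144", "192.168.50.175"}
		t := template.Must(template.New("kubelet").Parse(execStartTmpl))
		_ = t.Execute(os.Stdout, params)
	}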
	I0914 22:46:28.344591   45954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:28.354383   45954 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:28.354459   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:28.363277   45954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0914 22:46:28.378875   45954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:28.393535   45954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0914 22:46:28.408319   45954 ssh_runner.go:195] Run: grep 192.168.50.175	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:28.411497   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:28.421507   45954 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144 for IP: 192.168.50.175
	I0914 22:46:28.421536   45954 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:28.421702   45954 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:28.421742   45954 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:28.421805   45954 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.key
	I0914 22:46:28.421858   45954 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key.0216c1e7
	I0914 22:46:28.421894   45954 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key
	I0914 22:46:28.421994   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:28.422020   45954 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:28.422027   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:28.422048   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:28.422074   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:28.422095   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:28.422139   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:28.422695   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:28.443528   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:46:28.463679   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:28.483317   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:28.503486   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:28.523709   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:28.544539   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:28.565904   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:28.587316   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:28.611719   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:28.632158   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:28.652227   45954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:28.667709   45954 ssh_runner.go:195] Run: openssl version
	I0914 22:46:28.673084   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:28.682478   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686693   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686747   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.691836   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:28.701203   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:28.710996   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715353   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715408   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.720765   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:28.730750   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:28.740782   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745186   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745250   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.750589   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
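Each CA above is installed the classic OpenSSL way: copy the PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 at the certificate so OpenSSL's lookup machinery can find it. A short Go sketch of that step, shelling out to openssl for the hash; minikube runs the equivalent commands over SSH rather than calling a helper like this:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert hashes a PEM certificate's subject with openssl and creates
	// the /etc/ssl/certs/<hash>.0 symlink pointing back at the certificate.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("openssl hash: %w", err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}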
	I0914 22:46:28.760675   45954 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:28.764920   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:28.770573   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:28.776098   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:28.783455   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:28.790699   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:28.797514   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:28.804265   45954 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-799144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:28.804376   45954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:28.804427   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:28.833994   45954 cri.go:89] found id: ""
	I0914 22:46:28.834051   45954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:28.843702   45954 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:28.843724   45954 kubeadm.go:636] restartCluster start
	I0914 22:46:28.843769   45954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:28.852802   45954 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.854420   45954 kubeconfig.go:92] found "default-k8s-diff-port-799144" server: "https://192.168.50.175:8444"
	I0914 22:46:28.858058   45954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:28.866914   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.866968   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.877946   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.877969   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.878014   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.888579   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.389311   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.389420   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.401725   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.889346   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.889451   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.902432   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.388985   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.389062   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.401302   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.888853   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.888949   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.901032   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.389622   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.389733   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.405102   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.888685   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.888803   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.904300   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:32.388876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.388944   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.402419   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.995080   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:28.999205   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:28.999224   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:28.995414   46986 retry.go:31] will retry after 1.657878081s: waiting for machine to come up
	I0914 22:46:30.655422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:30.656029   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:30.656059   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:30.655960   46986 retry.go:31] will retry after 2.320968598s: waiting for machine to come up
	I0914 22:46:32.978950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:32.979423   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:32.979452   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:32.979369   46986 retry.go:31] will retry after 2.704173643s: waiting for machine to come up
	I0914 22:46:32.889585   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.889658   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.902514   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.388806   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.388906   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.405028   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.889633   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.889728   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.906250   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.388736   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.388810   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.403376   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.888851   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.888934   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.905873   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.389446   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.389516   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.404872   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.889475   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.889569   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.902431   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.388954   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.389054   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.401778   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.889442   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.889529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.902367   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:37.388925   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.389009   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.401860   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.685608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:35.686027   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:35.686064   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:35.685964   46986 retry.go:31] will retry after 2.240780497s: waiting for machine to come up
	I0914 22:46:37.928020   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:37.928402   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:37.928442   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:37.928354   46986 retry.go:31] will retry after 2.734049647s: waiting for machine to come up
	I0914 22:46:41.860186   46713 start.go:369] acquired machines lock for "old-k8s-version-930717" in 1m21.238611742s
	I0914 22:46:41.860234   46713 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:41.860251   46713 fix.go:54] fixHost starting: 
	I0914 22:46:41.860683   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:41.860738   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:41.877474   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0914 22:46:41.877964   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:41.878542   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:46:41.878568   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:41.878874   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:41.879057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:46:41.879276   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:46:41.880990   46713 fix.go:102] recreateIfNeeded on old-k8s-version-930717: state=Stopped err=<nil>
	I0914 22:46:41.881019   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	W0914 22:46:41.881175   46713 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:41.883128   46713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-930717" ...
	I0914 22:46:37.888876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.888950   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.901522   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.389056   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:38.389140   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:38.400632   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.867426   45954 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:38.867461   45954 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:38.867487   45954 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:38.867557   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:38.898268   45954 cri.go:89] found id: ""
	I0914 22:46:38.898328   45954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:38.914871   45954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:38.924737   45954 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:38.924785   45954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934436   45954 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934455   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.042672   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.982954   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.158791   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.235541   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.312855   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:46:40.312926   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.328687   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.842859   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.343019   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.842336   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.342351   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.665315   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.665775   46412 main.go:141] libmachine: (embed-certs-588699) Found IP for machine: 192.168.61.205
	I0914 22:46:40.665795   46412 main.go:141] libmachine: (embed-certs-588699) Reserving static IP address...
	I0914 22:46:40.665807   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has current primary IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.666273   46412 main.go:141] libmachine: (embed-certs-588699) Reserved static IP address: 192.168.61.205
	I0914 22:46:40.666316   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.666334   46412 main.go:141] libmachine: (embed-certs-588699) Waiting for SSH to be available...
	I0914 22:46:40.666375   46412 main.go:141] libmachine: (embed-certs-588699) DBG | skip adding static IP to network mk-embed-certs-588699 - found existing host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"}
	I0914 22:46:40.666401   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Getting to WaitForSSH function...
	I0914 22:46:40.668206   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668515   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.668542   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668654   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH client type: external
	I0914 22:46:40.668689   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa (-rw-------)
	I0914 22:46:40.668716   46412 main.go:141] libmachine: (embed-certs-588699) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:40.668728   46412 main.go:141] libmachine: (embed-certs-588699) DBG | About to run SSH command:
	I0914 22:46:40.668736   46412 main.go:141] libmachine: (embed-certs-588699) DBG | exit 0
	I0914 22:46:40.751202   46412 main.go:141] libmachine: (embed-certs-588699) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:40.751584   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetConfigRaw
	I0914 22:46:40.752291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:40.754685   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755054   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.755087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755318   46412 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/config.json ...
	I0914 22:46:40.755578   46412 machine.go:88] provisioning docker machine ...
	I0914 22:46:40.755603   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:40.755799   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.755940   46412 buildroot.go:166] provisioning hostname "embed-certs-588699"
	I0914 22:46:40.755959   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.756109   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.758111   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758435   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.758481   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758547   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.758686   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758798   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758983   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.759108   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.759567   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.759586   46412 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-588699 && echo "embed-certs-588699" | sudo tee /etc/hostname
	I0914 22:46:40.882559   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-588699
	
	I0914 22:46:40.882615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.885741   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.886137   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886403   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.886635   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886810   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886964   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.887176   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.887633   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.887662   46412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-588699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-588699/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-588699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:41.007991   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:41.008024   46412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:41.008075   46412 buildroot.go:174] setting up certificates
	I0914 22:46:41.008103   46412 provision.go:83] configureAuth start
	I0914 22:46:41.008118   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:41.008615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.011893   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012262   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.012295   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012467   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.014904   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015343   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.015378   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015551   46412 provision.go:138] copyHostCerts
	I0914 22:46:41.015605   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:41.015618   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:41.015691   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:41.015847   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:41.015864   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:41.015897   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:41.015979   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:41.015989   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:41.016019   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:41.016080   46412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.embed-certs-588699 san=[192.168.61.205 192.168.61.205 localhost 127.0.0.1 minikube embed-certs-588699]
	I0914 22:46:41.134486   46412 provision.go:172] copyRemoteCerts
	I0914 22:46:41.134537   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:41.134559   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.137472   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137789   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.137818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137995   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.138216   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.138365   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.138536   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.224196   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:41.244551   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:46:41.267745   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:41.292472   46412 provision.go:86] duration metric: configureAuth took 284.355734ms
	I0914 22:46:41.292497   46412 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:41.292668   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:41.292748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.295661   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296010   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.296042   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296246   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.296469   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296652   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296836   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.297031   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.297522   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.297556   46412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:41.609375   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:41.609417   46412 machine.go:91] provisioned docker machine in 853.82264ms
	I0914 22:46:41.609431   46412 start.go:300] post-start starting for "embed-certs-588699" (driver="kvm2")
	I0914 22:46:41.609444   46412 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:41.609472   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.609831   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:41.609890   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.613037   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613497   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.613525   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613662   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.613854   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.614023   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.614142   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.704618   46412 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:41.709759   46412 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:41.709787   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:41.709867   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:41.709991   46412 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:41.710127   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:41.721261   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:41.742359   46412 start.go:303] post-start completed in 132.913862ms
	I0914 22:46:41.742387   46412 fix.go:56] fixHost completed within 19.562130605s
	I0914 22:46:41.742418   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.745650   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.746172   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746369   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.746564   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746781   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746944   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.747138   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.747629   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.747648   46412 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:41.860006   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731601.811427748
	
	I0914 22:46:41.860030   46412 fix.go:206] guest clock: 1694731601.811427748
	I0914 22:46:41.860040   46412 fix.go:219] Guest: 2023-09-14 22:46:41.811427748 +0000 UTC Remote: 2023-09-14 22:46:41.742391633 +0000 UTC m=+142.955285980 (delta=69.036115ms)
	I0914 22:46:41.860091   46412 fix.go:190] guest clock delta is within tolerance: 69.036115ms
	I0914 22:46:41.860098   46412 start.go:83] releasing machines lock for "embed-certs-588699", held for 19.679882828s
	I0914 22:46:41.860131   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.860411   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.863136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863584   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.863618   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863721   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864206   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864398   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864477   46412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:41.864514   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.864639   46412 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:41.864666   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.867568   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.867976   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.868028   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868147   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868248   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868373   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868579   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.868691   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868833   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.868876   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.869026   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.980624   46412 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:41.986113   46412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:42.134956   46412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:42.141030   46412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:42.141101   46412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:42.158635   46412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:42.158660   46412 start.go:469] detecting cgroup driver to use...
	I0914 22:46:42.158722   46412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:42.173698   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:42.184948   46412 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:42.185007   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:42.196434   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:42.208320   46412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:42.326624   46412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:42.459498   46412 docker.go:212] disabling docker service ...
	I0914 22:46:42.459567   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:42.472479   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:42.486651   46412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:42.636161   46412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:42.739841   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:42.758562   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:42.779404   46412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:42.779472   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.787902   46412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:42.787954   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.799513   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.811428   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.823348   46412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:42.835569   46412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:42.842820   46412 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:42.842885   46412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:42.855225   46412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:42.863005   46412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:42.979756   46412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:43.181316   46412 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:43.181384   46412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:43.191275   46412 start.go:537] Will wait 60s for crictl version
	I0914 22:46:43.191343   46412 ssh_runner.go:195] Run: which crictl
	I0914 22:46:43.196264   46412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:43.228498   46412 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:43.228589   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.281222   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.341816   46412 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:43.343277   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:43.346473   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.346835   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:43.346882   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.347084   46412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:43.351205   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:43.364085   46412 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:43.364156   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:43.400558   46412 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:43.400634   46412 ssh_runner.go:195] Run: which lz4
	I0914 22:46:43.404906   46412 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:43.409239   46412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:43.409277   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:41.885236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Start
	I0914 22:46:41.885399   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring networks are active...
	I0914 22:46:41.886125   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network default is active
	I0914 22:46:41.886511   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network mk-old-k8s-version-930717 is active
	I0914 22:46:41.886855   46713 main.go:141] libmachine: (old-k8s-version-930717) Getting domain xml...
	I0914 22:46:41.887524   46713 main.go:141] libmachine: (old-k8s-version-930717) Creating domain...
	I0914 22:46:43.317748   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting to get IP...
	I0914 22:46:43.318757   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.319197   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.319288   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.319176   47160 retry.go:31] will retry after 287.487011ms: waiting for machine to come up
	I0914 22:46:43.608890   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.609712   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.609738   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.609656   47160 retry.go:31] will retry after 289.187771ms: waiting for machine to come up
	I0914 22:46:43.900234   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.900655   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.900679   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.900576   47160 retry.go:31] will retry after 433.007483ms: waiting for machine to come up
	I0914 22:46:44.335318   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.335775   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.335804   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.335727   47160 retry.go:31] will retry after 383.295397ms: waiting for machine to come up
	I0914 22:46:44.720415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.720967   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.721001   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.720856   47160 retry.go:31] will retry after 698.454643ms: waiting for machine to come up
	I0914 22:46:45.420833   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:45.421349   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:45.421391   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:45.421297   47160 retry.go:31] will retry after 938.590433ms: waiting for machine to come up
	I0914 22:46:42.842954   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.867206   45954 api_server.go:72] duration metric: took 2.554352134s to wait for apiserver process to appear ...
	I0914 22:46:42.867238   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:46:42.867257   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.755748   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:46:46.755780   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:46:46.755832   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.873209   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:46.873243   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.373637   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.391311   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.391349   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.873646   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.880286   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.880323   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:48.373423   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:48.389682   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:46:48.415694   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:46:48.415727   45954 api_server.go:131] duration metric: took 5.548481711s to wait for apiserver health ...
	I0914 22:46:48.415739   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.415748   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.417375   45954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:46:45.238555   46412 crio.go:444] Took 1.833681 seconds to copy over tarball
	I0914 22:46:45.238634   46412 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:48.251155   46412 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012492519s)
	I0914 22:46:48.251176   46412 crio.go:451] Took 3.012596 seconds to extract the tarball
	I0914 22:46:48.251184   46412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:48.290336   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:48.338277   46412 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:48.338302   46412 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:48.338378   46412 ssh_runner.go:195] Run: crio config
	I0914 22:46:48.402542   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.402564   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.402583   46412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:48.402604   46412 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.205 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-588699 NodeName:embed-certs-588699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:48.402791   46412 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-588699"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:48.402883   46412 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-588699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:46:48.402958   46412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:48.414406   46412 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:48.414484   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:48.426437   46412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 22:46:48.445351   46412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:48.463696   46412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0914 22:46:48.481887   46412 ssh_runner.go:195] Run: grep 192.168.61.205	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:48.485825   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:48.500182   46412 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699 for IP: 192.168.61.205
	I0914 22:46:48.500215   46412 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:48.500362   46412 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:48.500417   46412 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:48.500514   46412 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/client.key
	I0914 22:46:48.500600   46412 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key.8dac69f7
	I0914 22:46:48.500726   46412 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key
	I0914 22:46:48.500885   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:48.500926   46412 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:48.500942   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:48.500976   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:48.501008   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:48.501039   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:48.501096   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:48.501918   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:48.528790   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:46:48.558557   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:48.583664   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:48.608274   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:48.631638   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:48.655163   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:48.677452   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:48.700443   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:48.724547   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:48.751559   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:48.778910   46412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:48.794369   46412 ssh_runner.go:195] Run: openssl version
	I0914 22:46:48.799778   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:48.809263   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814790   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814848   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.820454   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:48.829942   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:46.361228   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:46.361816   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:46.361846   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:46.361795   47160 retry.go:31] will retry after 1.00738994s: waiting for machine to come up
	I0914 22:46:47.370525   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:47.370964   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:47.370991   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:47.370921   47160 retry.go:31] will retry after 1.441474351s: waiting for machine to come up
	I0914 22:46:48.813921   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:48.814415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:48.814447   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:48.814362   47160 retry.go:31] will retry after 1.497562998s: waiting for machine to come up
	I0914 22:46:50.313674   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:50.314191   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:50.314221   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:50.314137   47160 retry.go:31] will retry after 1.620308161s: waiting for machine to come up
	I0914 22:46:48.418825   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:46:48.456715   45954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:46:48.496982   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:46:48.515172   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:46:48.515209   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:46:48.515223   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:46:48.515234   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:46:48.515247   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:46:48.515261   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:46:48.515272   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:46:48.515285   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:46:48.515295   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:46:48.515307   45954 system_pods.go:74] duration metric: took 18.305048ms to wait for pod list to return data ...
	I0914 22:46:48.515320   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:46:48.518842   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:46:48.518875   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:46:48.518888   45954 node_conditions.go:105] duration metric: took 3.562448ms to run NodePressure ...
	I0914 22:46:48.518908   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:50.951051   45954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.432118027s)
	I0914 22:46:50.951087   45954 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959708   45954 kubeadm.go:787] kubelet initialised
	I0914 22:46:50.959735   45954 kubeadm.go:788] duration metric: took 8.637125ms waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959745   45954 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:50.966214   45954 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.975076   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975106   45954 pod_ready.go:81] duration metric: took 8.863218ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.975118   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975129   45954 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.982438   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982471   45954 pod_ready.go:81] duration metric: took 7.330437ms waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.982485   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982493   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.991067   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991102   45954 pod_ready.go:81] duration metric: took 8.574268ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.991115   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991125   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.006696   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006732   45954 pod_ready.go:81] duration metric: took 15.595604ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.006745   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006755   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.354645   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354678   45954 pod_ready.go:81] duration metric: took 347.913938ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.354690   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354702   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.754959   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.754998   45954 pod_ready.go:81] duration metric: took 400.283619ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.755012   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.755022   45954 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:52.156253   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156299   45954 pod_ready.go:81] duration metric: took 401.260791ms waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:52.156314   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156327   45954 pod_ready.go:38] duration metric: took 1.196571114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:52.156352   45954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:46:52.169026   45954 ops.go:34] apiserver oom_adj: -16
	I0914 22:46:52.169049   45954 kubeadm.go:640] restartCluster took 23.325317121s
	I0914 22:46:52.169059   45954 kubeadm.go:406] StartCluster complete in 23.364799998s
	I0914 22:46:52.169079   45954 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.169161   45954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:46:52.171787   45954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.172077   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:46:52.172229   45954 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:46:52.172310   45954 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172332   45954 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-799144"
	I0914 22:46:52.172325   45954 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799144"
	W0914 22:46:52.172340   45954 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:46:52.172347   45954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799144"
	I0914 22:46:52.172351   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:52.172394   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.172394   45954 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172424   45954 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.172436   45954 addons.go:240] addon metrics-server should already be in state true
	I0914 22:46:52.172500   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.173205   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173252   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173383   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173451   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173744   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173822   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.178174   45954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-799144" context rescaled to 1 replicas
	I0914 22:46:52.178208   45954 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:46:52.180577   45954 out.go:177] * Verifying Kubernetes components...
	I0914 22:46:52.182015   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:46:52.194030   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0914 22:46:52.194040   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I0914 22:46:52.194506   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.194767   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.195059   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195078   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195219   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195235   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195420   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.195642   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.195715   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.196346   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.196392   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.198560   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0914 22:46:52.199130   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.199612   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.199641   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.199995   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.200530   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.200575   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.206536   45954 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.206558   45954 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:46:52.206584   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.206941   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.206973   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.215857   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0914 22:46:52.216266   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.216801   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.216825   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.217297   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.217484   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.220211   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I0914 22:46:52.220740   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.221296   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.221314   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.221798   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.221986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.222185   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.224162   45954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:46:52.224261   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.225483   45954 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.225494   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:46:52.225511   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.225526   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0914 22:46:52.227067   45954 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:46:52.225976   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.228337   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:46:52.228354   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:46:52.228373   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.228750   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.228764   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.228959   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229601   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.229674   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.229702   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229908   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.230068   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.230171   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.230203   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.230280   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.230503   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.232673   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233097   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.233153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.233536   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.233684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.233821   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.251500   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43473
	I0914 22:46:52.252069   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.252702   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.252722   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.253171   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.253419   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.255233   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.255574   45954 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.255591   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:46:52.255609   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.258620   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.259178   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259379   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.259584   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.259754   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.259961   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.350515   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.367291   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:46:52.367309   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:46:52.413141   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:46:52.413170   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:46:52.419647   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.462672   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:52.462698   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:46:52.519331   45954 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:46:52.519330   45954 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:52.530851   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:53.719523   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368967292s)
	I0914 22:46:53.719575   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719582   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299890259s)
	I0914 22:46:53.719616   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719638   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.719589   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720079   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720083   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720097   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720101   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720107   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720111   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720121   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720080   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720404   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720414   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720425   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720501   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720525   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720538   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720553   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720804   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720822   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.721724   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.190817165s)
	I0914 22:46:53.721771   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.721784   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.722084   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.722100   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.722089   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.722115   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.722128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.723592   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.723602   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.723614   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.723631   45954 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-799144"
	I0914 22:46:53.725666   45954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:46:48.840421   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.179960   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.180026   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.185490   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:49.194744   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:49.205937   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210532   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210582   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.215917   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:49.225393   46412 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:49.229604   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:49.234795   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:49.239907   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:49.245153   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:49.250558   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:49.256142   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:49.261518   46412 kubeadm.go:404] StartCluster: {Name:embed-certs-588699 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:49.261618   46412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:49.261687   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:49.291460   46412 cri.go:89] found id: ""
	I0914 22:46:49.291560   46412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:49.300496   46412 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:49.300558   46412 kubeadm.go:636] restartCluster start
	I0914 22:46:49.300616   46412 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:49.309827   46412 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.311012   46412 kubeconfig.go:92] found "embed-certs-588699" server: "https://192.168.61.205:8443"
	I0914 22:46:49.313336   46412 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:49.321470   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.321528   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.332257   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.332275   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.332320   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.345427   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.846146   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.846240   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.859038   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.345492   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.345583   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.358070   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.845544   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.845605   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.861143   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.345602   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.345675   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.357406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.845964   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.846082   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.860079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.346093   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.346159   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.360952   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.845612   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.845717   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.860504   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:53.345991   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.360947   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.936297   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:51.936809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:51.936840   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:51.936747   47160 retry.go:31] will retry after 2.284330296s: waiting for machine to come up
	I0914 22:46:54.222960   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:54.223478   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:54.223530   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:54.223417   47160 retry.go:31] will retry after 3.537695113s: waiting for machine to come up
	I0914 22:46:53.726984   45954 addons.go:502] enable addons completed in 1.554762762s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:46:54.641725   45954 node_ready.go:58] node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:57.141217   45954 node_ready.go:49] node "default-k8s-diff-port-799144" has status "Ready":"True"
	I0914 22:46:57.141240   45954 node_ready.go:38] duration metric: took 4.621872993s waiting for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:57.141250   45954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:57.151019   45954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162159   45954 pod_ready.go:92] pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace has status "Ready":"True"
	I0914 22:46:57.162180   45954 pod_ready.go:81] duration metric: took 11.133949ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162189   45954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:53.845734   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.845815   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.858406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.346078   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.346138   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.360079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.845738   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.845801   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.861945   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.346533   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.346627   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.360445   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.845577   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.845681   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.856800   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.346374   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.346461   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.357724   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.846264   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.846376   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.857963   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.346006   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.357336   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.845877   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.845944   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.857310   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:58.345855   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.345925   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.357766   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.762315   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:57.762689   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:57.762714   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:57.762651   47160 retry.go:31] will retry after 3.773493672s: waiting for machine to come up
	I0914 22:46:59.185077   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:01.185320   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:02.912525   45407 start.go:369] acquired machines lock for "no-preload-344363" in 55.358672707s
	I0914 22:47:02.912580   45407 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:47:02.912592   45407 fix.go:54] fixHost starting: 
	I0914 22:47:02.913002   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:47:02.913035   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:47:02.932998   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0914 22:47:02.933535   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:47:02.933956   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:47:02.933977   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:47:02.934303   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:47:02.934484   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:02.934627   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:47:02.936412   45407 fix.go:102] recreateIfNeeded on no-preload-344363: state=Stopped err=<nil>
	I0914 22:47:02.936438   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	W0914 22:47:02.936601   45407 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:47:02.938235   45407 out.go:177] * Restarting existing kvm2 VM for "no-preload-344363" ...
	I0914 22:46:58.845728   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.845806   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.859436   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
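The repeated "Checking apiserver status" entries above are a fixed-interval poll: the pgrep probe is re-run roughly every 500ms until it either finds an apiserver PID or the surrounding context times out, at which point the "needs reconfigure" path just below is taken. A rough sketch of that wait loop, under the assumption that the probe is simply the pgrep command shown in the log (function and variable names are illustrative):

    // waitForAPIServerPID polls for a kube-apiserver process until ctx expires.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitForAPIServerPID(ctx context.Context) (string, error) {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil // found a PID
            }
            select {
            case <-ctx.Done():
                return "", ctx.Err() // "context deadline exceeded" -> fall back to reconfigure
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        fmt.Println(waitForAPIServerPID(ctx))
    }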
	I0914 22:46:59.322167   46412 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:59.322206   46412 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:59.322218   46412 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:59.322278   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:59.352268   46412 cri.go:89] found id: ""
	I0914 22:46:59.352371   46412 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:59.366742   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:59.374537   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:59.374598   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382227   46412 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382251   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:59.486171   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.268311   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.462362   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.528925   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
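The five kubeadm invocations above are the restart/reconfigure path: instead of a full "kubeadm init", only the individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) are re-run against the existing kubeadm.yaml. A compressed sketch of that sequence; the config path and PATH prefix are taken from the log, while the helper name and the fallback PATH tail are assumptions:

    // runInitPhases re-runs the individual kubeadm init phases used when
    // reconfiguring an existing cluster rather than creating a new one.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runInitPhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"env", "PATH=/var/lib/minikube/binaries/v1.28.1:/usr/bin", "kubeadm", "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v failed: %v\n%s", phase, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases(); err != nil {
            fmt.Println(err)
        }
    }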
	I0914 22:47:00.601616   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:00.601697   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:00.623311   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.140972   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.640574   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.141044   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.640374   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.140881   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.166662   46412 api_server.go:72] duration metric: took 2.565044214s to wait for apiserver process to appear ...
	I0914 22:47:03.166688   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:03.166703   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
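Once an apiserver process exists, the wait switches from pgrep to polling the /healthz endpoint over HTTPS. The probe has to tolerate the anonymous-user 403 and the early 500 responses (both visible further down in this log) and only treats a plain 200 as healthy. A small sketch of one such probe, assuming certificate verification is skipped the way a bootstrap health check typically is (names are illustrative):

    // healthzStatus performs one unauthenticated probe of the apiserver healthz
    // endpoint and returns the HTTP status code and response body.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func healthzStatus(url string) (int, string, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cluster cert
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return 0, "", err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode, string(body), nil
    }

    func main() {
        code, body, err := healthzStatus("https://192.168.61.205:8443/healthz")
        fmt.Println(code, err)
        fmt.Println(body) // 403 (anonymous) and 500 (poststarthooks pending) precede an eventual 200 "ok"
    }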
	I0914 22:47:01.540578   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541058   46713 main.go:141] libmachine: (old-k8s-version-930717) Found IP for machine: 192.168.72.70
	I0914 22:47:01.541095   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has current primary IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541106   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserving static IP address...
	I0914 22:47:01.541552   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserved static IP address: 192.168.72.70
	I0914 22:47:01.541579   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting for SSH to be available...
	I0914 22:47:01.541613   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.541646   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | skip adding static IP to network mk-old-k8s-version-930717 - found existing host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"}
	I0914 22:47:01.541672   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Getting to WaitForSSH function...
	I0914 22:47:01.543898   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544285   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.544317   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544428   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH client type: external
	I0914 22:47:01.544451   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa (-rw-------)
	I0914 22:47:01.544499   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:01.544518   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | About to run SSH command:
	I0914 22:47:01.544552   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | exit 0
	I0914 22:47:01.639336   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | SSH cmd err, output: <nil>: 
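The "Waiting for SSH to be available" step boils down to repeatedly running "exit 0" through the external ssh client with the options listed above until the command succeeds. A stripped-down sketch of that probe; the address and key path come from the log, while the retry timing and function name are assumptions:

    // waitForSSH retries a no-op "exit 0" over ssh until the guest accepts the
    // connection or the attempt budget is exhausted.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForSSH(addr, keyPath string, attempts int) error {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@" + addr,
            "exit 0",
        }
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("ssh", args...).Run(); err == nil {
                return nil // guest sshd is up
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh never became available: %v", err)
    }

    func main() {
        fmt.Println(waitForSSH("192.168.72.70", "/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa", 30))
    }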
	I0914 22:47:01.639694   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetConfigRaw
	I0914 22:47:01.640324   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.642979   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643345   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.643389   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643643   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:47:01.643833   46713 machine.go:88] provisioning docker machine ...
	I0914 22:47:01.643855   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:01.644085   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644249   46713 buildroot.go:166] provisioning hostname "old-k8s-version-930717"
	I0914 22:47:01.644272   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644434   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.646429   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.646771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.646819   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.647008   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.647209   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647360   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647536   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.647737   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.648245   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.648270   46713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-930717 && echo "old-k8s-version-930717" | sudo tee /etc/hostname
	I0914 22:47:01.789438   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-930717
	
	I0914 22:47:01.789472   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.792828   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793229   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.793277   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793459   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.793644   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793778   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793953   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.794120   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.794459   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.794478   46713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-930717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-930717/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-930717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:01.928496   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:01.928536   46713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:01.928567   46713 buildroot.go:174] setting up certificates
	I0914 22:47:01.928586   46713 provision.go:83] configureAuth start
	I0914 22:47:01.928609   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.928914   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.931976   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932368   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.932398   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932542   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.934939   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935311   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.935344   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935480   46713 provision.go:138] copyHostCerts
	I0914 22:47:01.935537   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:01.935548   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:01.935620   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:01.935775   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:01.935789   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:01.935824   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:01.935970   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:01.935981   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:01.936010   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:01.936086   46713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-930717 san=[192.168.72.70 192.168.72.70 localhost 127.0.0.1 minikube old-k8s-version-930717]
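Here provision.go mints a fresh server certificate signed by the shared minikube CA, with the machine's IPs and hostnames folded in as SANs. A condensed crypto/x509 sketch of the same idea; it assumes an RSA, PKCS#1-encoded CA key and hard-codes the SAN list from the log line above, so treat it as an illustration rather than minikube's actual implementation:

    // Sketch: sign a server certificate with an existing CA, adding IP and DNS SANs.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            log.Fatal(err)
        }
        return v
    }

    func main() {
        caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
        caKeyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
        caCert := must(x509.ParseCertificate(caBlock.Bytes))
        caKey := must(x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)) // assumes a PKCS#1 RSA CA key

        serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-930717"}},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-930717"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.72.70"), net.ParseIP("127.0.0.1")},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }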
	I0914 22:47:02.167446   46713 provision.go:172] copyRemoteCerts
	I0914 22:47:02.167510   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:02.167534   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.170442   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.170862   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.170900   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.171089   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.171302   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.171496   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.171645   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.267051   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:02.289098   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 22:47:02.312189   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:02.334319   46713 provision.go:86] duration metric: configureAuth took 405.716896ms
	I0914 22:47:02.334346   46713 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:02.334555   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:47:02.334638   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.337255   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337605   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.337637   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.337949   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338100   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338240   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.338384   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.338859   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.338890   46713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:02.654307   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:02.654332   46713 machine.go:91] provisioned docker machine in 1.010485195s
	I0914 22:47:02.654345   46713 start.go:300] post-start starting for "old-k8s-version-930717" (driver="kvm2")
	I0914 22:47:02.654358   46713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:02.654382   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.654747   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:02.654782   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.657773   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658153   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.658182   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658425   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.658630   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.658812   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.659001   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.750387   46713 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:02.754444   46713 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:02.754468   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:02.754545   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:02.754654   46713 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:02.754762   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:02.765781   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:02.788047   46713 start.go:303] post-start completed in 133.686385ms
	I0914 22:47:02.788072   46713 fix.go:56] fixHost completed within 20.927830884s
	I0914 22:47:02.788098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.791051   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791408   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.791441   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791628   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.791840   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792041   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792215   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.792383   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.792817   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.792836   46713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:02.912359   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731622.856601606
	
	I0914 22:47:02.912381   46713 fix.go:206] guest clock: 1694731622.856601606
	I0914 22:47:02.912391   46713 fix.go:219] Guest: 2023-09-14 22:47:02.856601606 +0000 UTC Remote: 2023-09-14 22:47:02.788077838 +0000 UTC m=+102.306332554 (delta=68.523768ms)
	I0914 22:47:02.912413   46713 fix.go:190] guest clock delta is within tolerance: 68.523768ms
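fix.go compares the guest's clock (read as "seconds.nanoseconds" over SSH) against the host's and only resyncs when the difference exceeds a tolerance; here the 68ms delta is accepted. A tiny sketch of that comparison, assuming the guest output has already been captured as a string (names and the tolerance value are illustrative):

    // clockDeltaOK parses a "seconds.nanoseconds" timestamp reported by the guest
    // and checks whether it is within tolerance of the host clock.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func clockDeltaOK(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        secs, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, false, err
        }
        var nanos int64
        if len(parts) == 2 {
            // pad/truncate the fractional part to exactly 9 digits before parsing
            frac := (parts[1] + "000000000")[:9]
            if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return 0, false, err
            }
        }
        delta := time.Since(time.Unix(secs, nanos))
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }

    func main() {
        fmt.Println(clockDeltaOK("1694731622.856601606", 2*time.Second))
    }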
	I0914 22:47:02.912424   46713 start.go:83] releasing machines lock for "old-k8s-version-930717", held for 21.052207532s
	I0914 22:47:02.912457   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.912730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:02.915769   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916200   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.916265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916453   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917073   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917245   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917352   46713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:02.917397   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.917535   46713 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:02.917563   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.920256   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920363   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920656   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920695   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920724   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920744   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920959   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921261   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921282   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921431   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921489   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921567   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.921635   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:03.014070   46713 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:03.047877   46713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:03.192347   46713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:03.200249   46713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:03.200324   46713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:03.215110   46713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:03.215138   46713 start.go:469] detecting cgroup driver to use...
	I0914 22:47:03.215201   46713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:03.228736   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:03.241326   46713 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:03.241377   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:03.253001   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:03.264573   46713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:03.371107   46713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:03.512481   46713 docker.go:212] disabling docker service ...
	I0914 22:47:03.512554   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:03.526054   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:03.537583   46713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:03.662087   46713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:03.793448   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:03.807574   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:03.828240   46713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0914 22:47:03.828311   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.842435   46713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:03.842490   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.856199   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.867448   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.878222   46713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:03.891806   46713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:03.899686   46713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:03.899740   46713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:03.912584   46713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:03.920771   46713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:04.040861   46713 ssh_runner.go:195] Run: sudo systemctl restart crio
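The block above rewires CRI-O for this profile: crictl is pointed at CRI-O's socket, the pause image and cgroup manager are swapped in 02-crio.conf with sed, br_netfilter and ip_forward are enabled, and the daemon is restarted. The two config edits expressed as an in-place rewrite in Go, for readers who want the intent without the sed syntax (a sketch; only the file path and values come from the log):

    // Rewrite pause_image and cgroup_manager in an existing 02-crio.conf,
    // mimicking the sed one-liners shown in the log.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }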
	I0914 22:47:04.230077   46713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:04.230147   46713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:04.235664   46713 start.go:537] Will wait 60s for crictl version
	I0914 22:47:04.235726   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:04.239737   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:04.279680   46713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:04.279755   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.329363   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.389025   46713 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0914 22:47:02.939505   45407 main.go:141] libmachine: (no-preload-344363) Calling .Start
	I0914 22:47:02.939701   45407 main.go:141] libmachine: (no-preload-344363) Ensuring networks are active...
	I0914 22:47:02.940415   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network default is active
	I0914 22:47:02.940832   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network mk-no-preload-344363 is active
	I0914 22:47:02.941287   45407 main.go:141] libmachine: (no-preload-344363) Getting domain xml...
	I0914 22:47:02.942103   45407 main.go:141] libmachine: (no-preload-344363) Creating domain...
	I0914 22:47:04.410207   45407 main.go:141] libmachine: (no-preload-344363) Waiting to get IP...
	I0914 22:47:04.411192   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.411669   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.411744   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.411647   47373 retry.go:31] will retry after 198.435142ms: waiting for machine to come up
	I0914 22:47:04.612435   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.612957   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.613025   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.612934   47373 retry.go:31] will retry after 350.950211ms: waiting for machine to come up
	I0914 22:47:04.965570   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.966332   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.966458   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.966377   47373 retry.go:31] will retry after 398.454996ms: waiting for machine to come up
	I0914 22:47:04.390295   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:04.393815   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394249   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:04.394282   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394543   46713 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:04.398850   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:04.411297   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:47:04.411363   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:04.443950   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:04.444023   46713 ssh_runner.go:195] Run: which lz4
	I0914 22:47:04.448422   46713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:47:04.453479   46713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:47:04.453505   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
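Because "crictl images --output json" reports none of the expected v1.16.0 control-plane images and /preloaded.tar.lz4 does not yet exist on the guest, the cached preload tarball (~441 MB) is copied over instead of pulling each image individually. A rough sketch of that decision, using a plain substring match on the crictl output rather than committing to its JSON schema (function name and fallback logic are illustrative):

    // needsPreload decides whether the preload tarball has to be copied to the
    // guest: true when the reference image is missing and no tarball is present.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func needsPreload(refImage string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        if strings.Contains(string(out), refImage) {
            return false, nil // images already loaded, no preload needed
        }
        // image missing: only skip the copy if the tarball is already on disk
        if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err == nil {
            return false, nil
        }
        return true, nil
    }

    func main() {
        fmt.Println(needsPreload("registry.k8s.io/kube-apiserver:v1.16.0"))
    }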
	I0914 22:47:03.686086   45954 pod_ready.go:92] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.686112   45954 pod_ready.go:81] duration metric: took 6.523915685s waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.686125   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692434   45954 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.692454   45954 pod_ready.go:81] duration metric: took 6.320818ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692466   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698065   45954 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.698088   45954 pod_ready.go:81] duration metric: took 5.613243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698100   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703688   45954 pod_ready.go:92] pod "kube-proxy-j2qmv" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.703706   45954 pod_ready.go:81] duration metric: took 5.599421ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703718   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708487   45954 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.708505   45954 pod_ready.go:81] duration metric: took 4.779322ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708516   45954 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:05.993620   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
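
The polling above is the harness repeatedly checking whether the metrics-server pod in the default-k8s-diff-port-799144 cluster has reached Ready. Outside the harness, roughly the same wait can be expressed with kubectl; the context and pod name are taken from this log, while the k8s-app=metrics-server selector is the addon's usual label and is an assumption here.

    # block until the pod reports Ready (or the timeout expires)
    kubectl --context default-k8s-diff-port-799144 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-57f55c9bc5-hfgp8 --timeout=6m
    # or watch the addon's pods by label instead of by name (label is an assumption)
    kubectl --context default-k8s-diff-port-799144 -n kube-system \
      get pods -l k8s-app=metrics-server -w
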
	I0914 22:47:07.475579   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.475617   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:07.475631   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:07.531335   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.531366   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:08.032057   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.039350   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.039384   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:08.531559   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.538857   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.538891   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:09.031899   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:09.037891   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:47:09.047398   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:47:09.047426   46412 api_server.go:131] duration metric: took 5.880732639s to wait for apiserver health ...
	I0914 22:47:09.047434   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:47:09.047440   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:09.049137   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
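
The 403 -> 500 -> 200 progression above is the embed-certs-588699 apiserver coming back up: anonymous requests are rejected first, then the verbose health report shows the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending, and finally every check reports ok. Reproducing the per-check breakdown by hand needs an authenticated request, since anonymous access to /healthz is forbidden here; kubectl's raw API access is the simplest authenticated path (the context name matches the profile seen elsewhere in this log).

    # verbose per-check output, the same [+]/[-] listing shown above
    kubectl --context embed-certs-588699 get --raw '/healthz?verbose'
    # unauthenticated probe: expect the same 403 Status body as in the log
    curl -sk https://192.168.61.205:8443/healthz
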
	I0914 22:47:05.366070   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.366812   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.366844   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.366740   47373 retry.go:31] will retry after 471.857141ms: waiting for machine to come up
	I0914 22:47:05.840519   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.841198   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.841229   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.841150   47373 retry.go:31] will retry after 632.189193ms: waiting for machine to come up
	I0914 22:47:06.475175   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:06.475769   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:06.475800   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:06.475704   47373 retry.go:31] will retry after 866.407813ms: waiting for machine to come up
	I0914 22:47:07.344343   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:07.344865   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:07.344897   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:07.344815   47373 retry.go:31] will retry after 1.101301607s: waiting for machine to come up
	I0914 22:47:08.448452   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:08.449070   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:08.449111   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:08.449014   47373 retry.go:31] will retry after 995.314765ms: waiting for machine to come up
	I0914 22:47:09.446294   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:09.446708   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:09.446740   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:09.446653   47373 retry.go:31] will retry after 1.180552008s: waiting for machine to come up
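
The retry loop above is libmachine waiting for the freshly booted no-preload-344363 VM to obtain a DHCP lease on its private libvirt network. The same state can be inspected directly with virsh; the network and domain names are taken from this log.

    # list DHCP leases on the profile's libvirt network
    sudo virsh net-dhcp-leases mk-no-preload-344363
    # confirm which MAC the domain's interface uses (52:54:00:de:ec:3d above)
    sudo virsh domiflist no-preload-344363
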
	I0914 22:47:05.984485   46713 crio.go:444] Took 1.536109 seconds to copy over tarball
	I0914 22:47:05.984562   46713 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:47:09.247825   46713 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.263230608s)
	I0914 22:47:09.247858   46713 crio.go:451] Took 3.263345 seconds to extract the tarball
	I0914 22:47:09.247871   46713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:47:09.289821   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:09.340429   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:09.340463   46713 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:09.340544   46713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.340568   46713 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0914 22:47:09.340535   46713 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.340531   46713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.340789   46713 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.340811   46713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.340886   46713 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.340793   46713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.342655   46713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.342658   46713 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.342636   46713 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.342635   46713 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.342793   46713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.561063   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0914 22:47:09.564079   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.564246   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.564957   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.566014   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.571757   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.578469   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.687502   46713 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0914 22:47:09.687548   46713 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0914 22:47:09.687591   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.727036   46713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0914 22:47:09.727085   46713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.727140   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0914 22:47:09.737952   46713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0914 22:47:09.737986   46713 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.737990   46713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0914 22:47:09.738002   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738013   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738023   46713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.738063   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.744728   46713 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0914 22:47:09.744768   46713 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.744813   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753014   46713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0914 22:47:09.753055   46713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.753080   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753104   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.753056   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0914 22:47:09.753149   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.753193   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.753213   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.758372   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.758544   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.875271   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0914 22:47:09.875299   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0914 22:47:09.875357   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0914 22:47:09.875382   46713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.875404   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0914 22:47:09.876393   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0914 22:47:09.878339   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0914 22:47:09.878491   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0914 22:47:09.881457   46713 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0914 22:47:09.881475   46713 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.881521   46713 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
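
Since crictl reported the v1.16.0 images as not preloaded, minikube falls back to its per-image cache: each required image is removed from the runtime with crictl rmi, then the cached tarball is streamed in with podman load, whose image store CRI-O shares. The commands below simply mirror the ones in this log and confirm the result.

    # load the cached tarball (idempotent) and check it is visible to CRI-O
    sudo podman load -i /var/lib/minikube/images/pause_3.1
    sudo crictl images | grep pause
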
	I0914 22:47:08.496805   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.993044   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:09.050966   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:09.061912   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:09.096783   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:09.111938   46412 system_pods.go:59] 8 kube-system pods found
	I0914 22:47:09.111976   46412 system_pods.go:61] "coredns-5dd5756b68-zrd8r" [5b5f18a0-d6ee-42f2-b31a-4f8555b50388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:09.111988   46412 system_pods.go:61] "etcd-embed-certs-588699" [b32d61b5-8c3f-4980-9f0f-c08630be9c36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:47:09.112001   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [58ac976e-7a8c-4aee-9ee5-b92bd7e897b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:47:09.112015   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [3f9587f5-fe32-446a-a4c9-cb679b177937] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:47:09.112036   46412 system_pods.go:61] "kube-proxy-l8pq9" [4aecae33-dcd9-4ec6-a537-ecbb076c44d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:47:09.112052   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [f23ab185-f4c2-4e39-936d-51d51538b0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:47:09.112066   46412 system_pods.go:61] "metrics-server-57f55c9bc5-zvk82" [3c48277c-4604-4a83-82ea-2776cf0d0537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:47:09.112077   46412 system_pods.go:61] "storage-provisioner" [f0acbbe1-c326-4863-ae2e-d2d3e5be07c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:47:09.112090   46412 system_pods.go:74] duration metric: took 15.280254ms to wait for pod list to return data ...
	I0914 22:47:09.112103   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:09.119686   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:09.119725   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:09.119747   46412 node_conditions.go:105] duration metric: took 7.637688ms to run NodePressure ...
	I0914 22:47:09.119768   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:09.407351   46412 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414338   46412 kubeadm.go:787] kubelet initialised
	I0914 22:47:09.414361   46412 kubeadm.go:788] duration metric: took 6.974234ms waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414369   46412 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:47:09.424482   46412 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:12.171133   46412 pod_ready.go:102] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"False"
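
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above (22:47:09.061912) is the bridge CNI configuration minikube writes after recommending the bridge plugin. Its exact contents are not shown in the log; a minimal bridge conflist for the 10.244.0.0/16 pod CIDR used by these profiles generally looks like the sketch below, with field values that are illustrative rather than minikube's literal file.

    # illustrative bridge CNI config, not the literal file minikube generated
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
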
	I0914 22:47:10.628919   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:10.629418   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:10.629449   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:10.629366   47373 retry.go:31] will retry after 1.486310454s: waiting for machine to come up
	I0914 22:47:12.117762   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:12.118350   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:12.118381   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:12.118295   47373 retry.go:31] will retry after 2.678402115s: waiting for machine to come up
	I0914 22:47:14.798599   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:14.799127   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:14.799160   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:14.799060   47373 retry.go:31] will retry after 2.724185493s: waiting for machine to come up
	I0914 22:47:10.647242   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:12.244764   46713 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.363213143s)
	I0914 22:47:12.244798   46713 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0914 22:47:12.244823   46713 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.013457524s)
	I0914 22:47:12.244888   46713 cache_images.go:92] LoadImages completed in 2.904411161s
	W0914 22:47:12.244978   46713 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0914 22:47:12.245070   46713 ssh_runner.go:195] Run: crio config
	I0914 22:47:12.328636   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:12.328663   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:12.328687   46713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:12.328710   46713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-930717 NodeName:old-k8s-version-930717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 22:47:12.328882   46713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-930717"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-930717
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:12.328984   46713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-930717 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:47:12.329062   46713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0914 22:47:12.339084   46713 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:12.339169   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:12.348354   46713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 22:47:12.369083   46713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:12.388242   46713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0914 22:47:12.407261   46713 ssh_runner.go:195] Run: grep 192.168.72.70	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:12.411055   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:12.425034   46713 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717 for IP: 192.168.72.70
	I0914 22:47:12.425070   46713 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:12.425236   46713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:12.425283   46713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:12.425372   46713 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.key
	I0914 22:47:12.425451   46713 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key.382dacf3
	I0914 22:47:12.425512   46713 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key
	I0914 22:47:12.425642   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:12.425671   46713 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:12.425685   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:12.425708   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:12.425732   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:12.425751   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:12.425789   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:12.426339   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:12.456306   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:47:12.486038   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:12.520941   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:47:12.552007   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:12.589620   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:12.619358   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:12.650395   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:12.678898   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:12.704668   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:12.730499   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:12.755286   46713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:12.773801   46713 ssh_runner.go:195] Run: openssl version
	I0914 22:47:12.781147   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:12.793953   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799864   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799922   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.806881   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:12.817936   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:12.830758   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836538   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836613   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.843368   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:12.855592   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:12.866207   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871317   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871368   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.878438   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:12.891012   46713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:12.895887   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:12.902284   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:12.909482   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:12.916524   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:12.924045   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:12.929935   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
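
The run of openssl commands above is the certificate freshness check: -checkend 86400 exits 0 when the certificate is still valid 24 hours from now and 1 when it would expire within that window, which is what decides whether minikube regenerates certs on restart. For example:

    # 0 = still valid for at least another 24h, 1 = expiring or expired
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "still valid" || echo "needs renewal"
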
	I0914 22:47:12.937292   46713 kubeadm.go:404] StartCluster: {Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:12.937417   46713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:12.937470   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:12.975807   46713 cri.go:89] found id: ""
	I0914 22:47:12.975902   46713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:12.988356   46713 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:12.988379   46713 kubeadm.go:636] restartCluster start
	I0914 22:47:12.988434   46713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:13.000294   46713 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.001492   46713 kubeconfig.go:92] found "old-k8s-version-930717" server: "https://192.168.72.70:8443"
	I0914 22:47:13.008583   46713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:13.023004   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.023065   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.037604   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.037625   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.037671   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.048939   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.549653   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.549746   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.561983   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.049481   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.049588   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.064694   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.549101   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.549195   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.564858   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:15.049112   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.049206   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.063428   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
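
The repeated "Checking apiserver status ... Process exited with status 1" records above mean pgrep finds no kube-apiserver process yet, which is expected while restartCluster is still bringing the old-k8s-version control plane back. The same condition can be confirmed through the runtime; both commands mirror ones already used in this log (crictl's --name filter is the only addition).

    # no apiserver container yet; -a also lists exited containers
    sudo crictl ps -a --name kube-apiserver
    # the probe minikube itself runs in this loop
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
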
	I0914 22:47:12.993654   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:14.995358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:13.946979   46412 pod_ready.go:92] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:13.947004   46412 pod_ready.go:81] duration metric: took 4.522495708s waiting for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:13.947013   46412 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:15.968061   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:18.465595   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:17.526472   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:17.526915   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:17.526946   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:17.526867   47373 retry.go:31] will retry after 3.587907236s: waiting for machine to come up
	I0914 22:47:15.549179   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.549273   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.561977   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.049593   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.049678   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.063654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.549178   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.549248   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.561922   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.049041   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.049131   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.062442   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.550005   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.550066   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.561254   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.049855   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.049932   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.062226   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.549845   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.549941   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.561219   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.049739   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.049829   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.061225   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.550035   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.550112   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.561546   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:20.049979   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.050080   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.061478   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.489830   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:19.490802   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.490931   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.118871   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119369   45407 main.go:141] libmachine: (no-preload-344363) Found IP for machine: 192.168.39.60
	I0914 22:47:21.119391   45407 main.go:141] libmachine: (no-preload-344363) Reserving static IP address...
	I0914 22:47:21.119418   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has current primary IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119860   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.119888   45407 main.go:141] libmachine: (no-preload-344363) Reserved static IP address: 192.168.39.60
	I0914 22:47:21.119906   45407 main.go:141] libmachine: (no-preload-344363) DBG | skip adding static IP to network mk-no-preload-344363 - found existing host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"}
	I0914 22:47:21.119931   45407 main.go:141] libmachine: (no-preload-344363) DBG | Getting to WaitForSSH function...
	I0914 22:47:21.119949   45407 main.go:141] libmachine: (no-preload-344363) Waiting for SSH to be available...
	I0914 22:47:21.121965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122282   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.122312   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122392   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH client type: external
	I0914 22:47:21.122429   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa (-rw-------)
	I0914 22:47:21.122482   45407 main.go:141] libmachine: (no-preload-344363) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:21.122510   45407 main.go:141] libmachine: (no-preload-344363) DBG | About to run SSH command:
	I0914 22:47:21.122521   45407 main.go:141] libmachine: (no-preload-344363) DBG | exit 0
	I0914 22:47:21.206981   45407 main.go:141] libmachine: (no-preload-344363) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:21.207366   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetConfigRaw
	I0914 22:47:21.208066   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.210323   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210607   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.210639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210795   45407 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/config.json ...
	I0914 22:47:21.211016   45407 machine.go:88] provisioning docker machine ...
	I0914 22:47:21.211036   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:21.211258   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211431   45407 buildroot.go:166] provisioning hostname "no-preload-344363"
	I0914 22:47:21.211455   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211629   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.213574   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.213887   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.213921   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.214015   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.214181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214338   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.214648   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.215041   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.215056   45407 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-344363 && echo "no-preload-344363" | sudo tee /etc/hostname
	I0914 22:47:21.347323   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-344363
	
	I0914 22:47:21.347358   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.350445   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.350846   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.350882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.351144   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.351393   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351599   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351766   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.351944   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.352264   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.352291   45407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-344363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-344363/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-344363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:21.471619   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:21.471648   45407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:21.471671   45407 buildroot.go:174] setting up certificates
	I0914 22:47:21.471683   45407 provision.go:83] configureAuth start
	I0914 22:47:21.471696   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.472019   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.474639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475113   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.475141   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475293   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.477627   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.477976   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.478009   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.478148   45407 provision.go:138] copyHostCerts
	I0914 22:47:21.478189   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:21.478198   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:21.478249   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:21.478336   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:21.478344   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:21.478362   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:21.478416   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:21.478423   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:21.478439   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:21.478482   45407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.no-preload-344363 san=[192.168.39.60 192.168.39.60 localhost 127.0.0.1 minikube no-preload-344363]
	I0914 22:47:21.546956   45407 provision.go:172] copyRemoteCerts
	I0914 22:47:21.547006   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:21.547029   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.549773   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550217   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.550257   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550468   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.550683   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.550850   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.551050   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:21.635939   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:21.656944   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:47:21.679064   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:21.701127   45407 provision.go:86] duration metric: configureAuth took 229.434247ms
	I0914 22:47:21.701147   45407 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:21.701319   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:47:21.701381   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.704100   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704475   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.704512   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704672   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.704865   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705046   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705218   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.705382   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.705828   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.705849   45407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:22.037291   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:22.037337   45407 machine.go:91] provisioned docker machine in 826.295956ms
	I0914 22:47:22.037350   45407 start.go:300] post-start starting for "no-preload-344363" (driver="kvm2")
	I0914 22:47:22.037363   45407 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:22.037396   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.037704   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:22.037729   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.040372   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040729   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.040757   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040896   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.041082   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.041266   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.041373   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.129612   45407 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:22.133522   45407 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:22.133550   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:22.133625   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:22.133715   45407 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:22.133844   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:22.142411   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:22.165470   45407 start.go:303] post-start completed in 128.106418ms
	I0914 22:47:22.165496   45407 fix.go:56] fixHost completed within 19.252903923s
	I0914 22:47:22.165524   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.168403   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168696   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.168731   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168894   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.169095   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169248   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169384   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.169571   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:22.169891   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:22.169904   45407 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:22.284038   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731642.258576336
	
	I0914 22:47:22.284062   45407 fix.go:206] guest clock: 1694731642.258576336
	I0914 22:47:22.284071   45407 fix.go:219] Guest: 2023-09-14 22:47:22.258576336 +0000 UTC Remote: 2023-09-14 22:47:22.16550191 +0000 UTC m=+357.203571663 (delta=93.074426ms)
	I0914 22:47:22.284107   45407 fix.go:190] guest clock delta is within tolerance: 93.074426ms
	I0914 22:47:22.284117   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 19.371563772s
	I0914 22:47:22.284146   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.284388   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:22.286809   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287091   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.287133   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287288   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287782   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287978   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.288050   45407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:22.288085   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.288176   45407 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:22.288197   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.290608   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.290936   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.290965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291067   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291157   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291345   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291516   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.291529   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.291554   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291649   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.291706   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291837   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291975   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.292158   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.417570   45407 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:22.423145   45407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:22.563752   45407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:22.569625   45407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:22.569718   45407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:22.585504   45407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:22.585527   45407 start.go:469] detecting cgroup driver to use...
	I0914 22:47:22.585610   45407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:22.599600   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:22.612039   45407 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:22.612080   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:22.624817   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:22.637141   45407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:22.744181   45407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:22.864420   45407 docker.go:212] disabling docker service ...
	I0914 22:47:22.864490   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:22.877360   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:22.888786   45407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:23.000914   45407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:23.137575   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:23.150682   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:23.167898   45407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:47:23.167966   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.176916   45407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:23.176991   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.185751   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.195260   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.204852   45407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:23.214303   45407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:23.222654   45407 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:23.222717   45407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:23.235654   45407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:23.244081   45407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:23.357943   45407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:47:23.521315   45407 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:23.521410   45407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:23.526834   45407 start.go:537] Will wait 60s for crictl version
	I0914 22:47:23.526889   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:23.530250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:23.562270   45407 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:23.562358   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.606666   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.658460   45407 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:47:20.467600   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:20.964310   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.964331   46412 pod_ready.go:81] duration metric: took 7.017312906s waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.964349   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968539   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.968555   46412 pod_ready.go:81] duration metric: took 4.200242ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968563   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973180   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.973194   46412 pod_ready.go:81] duration metric: took 4.625123ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973206   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977403   46412 pod_ready.go:92] pod "kube-proxy-l8pq9" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.977418   46412 pod_ready.go:81] duration metric: took 4.206831ms waiting for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977425   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375236   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:22.375259   46412 pod_ready.go:81] duration metric: took 1.397826525s waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375271   46412 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:23.659885   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:23.662745   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663195   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:23.663228   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663452   45407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:23.667637   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:23.678881   45407 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:47:23.678929   45407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:23.708267   45407 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:47:23.708309   45407 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:23.708390   45407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.708421   45407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.708424   45407 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0914 22:47:23.708437   45407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.708425   45407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.708537   45407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.708403   45407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.708393   45407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.709903   45407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.709887   45407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.709899   45407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.710189   45407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.710260   45407 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0914 22:47:23.710346   45407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.917134   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.929080   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.929396   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0914 22:47:23.935684   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.936236   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.937239   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.937622   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.006429   45407 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0914 22:47:24.006479   45407 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.006524   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.102547   45407 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0914 22:47:24.102597   45407 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.102641   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201012   45407 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0914 22:47:24.201050   45407 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.201100   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201106   45407 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0914 22:47:24.201138   45407 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.201156   45407 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0914 22:47:24.201203   45407 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.201227   45407 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0914 22:47:24.201282   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.201294   45407 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.201329   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201236   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201180   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.206295   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.263389   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0914 22:47:24.263451   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.263501   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.263513   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:24.263534   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0914 22:47:24.263573   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.263665   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.273844   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0914 22:47:24.273932   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:24.338823   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0914 22:47:24.338944   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:24.344560   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0914 22:47:24.344580   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0914 22:47:24.344594   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344635   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344659   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:24.344678   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0914 22:47:24.344723   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0914 22:47:24.344745   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0914 22:47:24.344816   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:24.346975   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0914 22:47:24.953835   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:20.549479   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.549585   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.563121   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.049732   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.049807   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.061447   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.549012   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.549073   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.561653   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.049517   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.049582   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.062280   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.549943   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.550017   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.562654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:23.024019   46713 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:23.024043   46713 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:23.024054   46713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:23.024101   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:23.060059   46713 cri.go:89] found id: ""
	I0914 22:47:23.060116   46713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:23.078480   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:23.087665   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:23.087714   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096513   46713 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096535   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:23.205072   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.081881   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.285041   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.364758   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.468127   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:24.468201   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:24.483354   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.007133   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.507231   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:23.992945   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.492600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:24.475872   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.978889   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.317110   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.97244294s)
	I0914 22:47:26.317145   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0914 22:47:26.317167   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317174   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.972489589s)
	I0914 22:47:26.317202   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0914 22:47:26.317215   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317248   45407 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.363386448s)
	I0914 22:47:26.317281   45407 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 22:47:26.317319   45407 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.317366   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:26.317213   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (1.972376756s)
	I0914 22:47:26.317426   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0914 22:47:28.397989   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (2.080744487s)
	I0914 22:47:28.398021   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0914 22:47:28.398031   45407 ssh_runner.go:235] Completed: which crictl: (2.080647539s)
	I0914 22:47:28.398048   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398093   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398095   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.006554   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:26.032232   46713 api_server.go:72] duration metric: took 1.564104415s to wait for apiserver process to appear ...
	I0914 22:47:26.032255   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:26.032270   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:28.992292   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.490442   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.033000   46713 api_server.go:269] stopped: https://192.168.72.70:8443/healthz: Get "https://192.168.72.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 22:47:31.033044   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:31.568908   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:31.568937   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:32.069915   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.080424   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.080456   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:32.570110   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.580879   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.580918   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:33.069247   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:33.077664   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:47:33.086933   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:47:33.086960   46713 api_server.go:131] duration metric: took 7.054699415s to wait for apiserver health ...
	I0914 22:47:33.086973   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:33.086981   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:33.088794   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
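For context, the 403 responses above are what an unauthenticated probe receives while RBAC bootstrap is still completing (the request is attributed to system:anonymous); once an authenticated client is available, the same verbose breakdown can be requested directly. A minimal sketch, assuming the profile/context name matches the node name shown in this log and that the chosen client certificate is actually authorized for /healthz:

    # verbose healthz via kubectl's raw API access (authenticated)
    kubectl --context old-k8s-version-930717 get --raw='/healthz?verbose'

    # or directly against the endpoint minikube polls, using the cluster CA and a
    # client cert/key pair from the node (paths taken from this log; whether the
    # cert is authorized for /healthz depends on its group bindings)
    curl --cacert /var/lib/minikube/certs/ca.crt \
         --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         --key /var/lib/minikube/certs/apiserver-kubelet-client.key \
         'https://192.168.72.70:8443/healthz?verbose'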
	I0914 22:47:29.476304   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.975459   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:30.974281   45407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.57612291s)
	I0914 22:47:30.974347   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 22:47:30.974381   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.576263058s)
	I0914 22:47:30.974403   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0914 22:47:30.974427   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:30.974455   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:30.974470   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:33.737309   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.762815322s)
	I0914 22:47:33.737355   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0914 22:47:33.737379   45407 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.737322   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.762844826s)
	I0914 22:47:33.737464   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 22:47:33.737436   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.090357   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:33.103371   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:33.123072   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:33.133238   46713 system_pods.go:59] 7 kube-system pods found
	I0914 22:47:33.133268   46713 system_pods.go:61] "coredns-5644d7b6d9-8sbjk" [638464d2-96db-460d-bf82-0ee79df816da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:33.133278   46713 system_pods.go:61] "etcd-old-k8s-version-930717" [4b38f48a-fc4a-43d5-a2b4-414aff712c1b] Running
	I0914 22:47:33.133286   46713 system_pods.go:61] "kube-apiserver-old-k8s-version-930717" [523a3adc-8c68-4980-8a53-133476ce2488] Running
	I0914 22:47:33.133294   46713 system_pods.go:61] "kube-controller-manager-old-k8s-version-930717" [36fd7e01-4a5d-446f-8370-f7a7e886571c] Running
	I0914 22:47:33.133306   46713 system_pods.go:61] "kube-proxy-l4qz4" [c61d0471-0a9e-4662-b723-39944c8b3c31] Running
	I0914 22:47:33.133314   46713 system_pods.go:61] "kube-scheduler-old-k8s-version-930717" [f6d45807-c7f2-4545-b732-45dbd945c660] Running
	I0914 22:47:33.133323   46713 system_pods.go:61] "storage-provisioner" [2956bea1-80f8-4f61-a635-4332d4e3042e] Running
	I0914 22:47:33.133331   46713 system_pods.go:74] duration metric: took 10.233824ms to wait for pod list to return data ...
	I0914 22:47:33.133343   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:33.137733   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:33.137765   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:33.137776   46713 node_conditions.go:105] duration metric: took 4.42667ms to run NodePressure ...
	I0914 22:47:33.137795   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:33.590921   46713 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:33.597720   46713 retry.go:31] will retry after 159.399424ms: kubelet not initialised
	I0914 22:47:33.767747   46713 retry.go:31] will retry after 191.717885ms: kubelet not initialised
	I0914 22:47:33.967120   46713 retry.go:31] will retry after 382.121852ms: kubelet not initialised
	I0914 22:47:34.354106   46713 retry.go:31] will retry after 1.055800568s: kubelet not initialised
	I0914 22:47:35.413704   46713 retry.go:31] will retry after 1.341728619s: kubelet not initialised
	I0914 22:47:33.993188   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.491280   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:34.475254   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.977175   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.760804   46713 retry.go:31] will retry after 2.668611083s: kubelet not initialised
	I0914 22:47:39.434688   46713 retry.go:31] will retry after 2.1019007s: kubelet not initialised
	I0914 22:47:38.994051   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.490913   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:38.998980   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.474686   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:40.530763   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.793268381s)
	I0914 22:47:40.530793   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0914 22:47:40.530820   45407 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:40.530881   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:41.888277   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.357355595s)
	I0914 22:47:41.888305   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0914 22:47:41.888338   45407 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:41.888405   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:42.537191   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 22:47:42.537244   45407 cache_images.go:123] Successfully loaded all cached images
	I0914 22:47:42.537251   45407 cache_images.go:92] LoadImages completed in 18.828927203s
	I0914 22:47:42.537344   45407 ssh_runner.go:195] Run: crio config
	I0914 22:47:42.594035   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:47:42.594056   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:42.594075   45407 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:42.594098   45407 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-344363 NodeName:no-preload-344363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:47:42.594272   45407 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-344363"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:42.594383   45407 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-344363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
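As an aside, the kubeadm.yaml documents and the kubelet drop-in rendered above can be sanity-checked on the node before kubeadm consumes them; a hedged sketch, reusing the binary path and config path that appear in this log (the `config validate` subcommand is only present in recent kubeadm releases):

    # validate the InitConfiguration/ClusterConfiguration/KubeletConfiguration/
    # KubeProxyConfiguration documents without touching the cluster
    sudo /var/lib/minikube/binaries/v1.28.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

    # or exercise the whole init flow without persisting anything
    sudo /var/lib/minikube/binaries/v1.28.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run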
	I0914 22:47:42.594449   45407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:47:42.604172   45407 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:42.604243   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:42.612570   45407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0914 22:47:42.628203   45407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:42.643625   45407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0914 22:47:42.658843   45407 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:42.661922   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:42.672252   45407 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363 for IP: 192.168.39.60
	I0914 22:47:42.672279   45407 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:42.672420   45407 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:42.672462   45407 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:42.672536   45407 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.key
	I0914 22:47:42.672630   45407 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key.a014e791
	I0914 22:47:42.672693   45407 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key
	I0914 22:47:42.672828   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:42.672867   45407 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:42.672879   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:42.672915   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:42.672948   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:42.672982   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:42.673044   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:42.673593   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:42.695080   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:47:42.716844   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:42.746475   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0914 22:47:42.769289   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:42.790650   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:42.811665   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:42.833241   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:42.853851   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:42.875270   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:42.896913   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:42.917370   45407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:42.934549   45407 ssh_runner.go:195] Run: openssl version
	I0914 22:47:42.939762   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:42.949829   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954155   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954204   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.959317   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:42.968463   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:42.979023   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983436   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983502   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.988655   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:42.998288   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:43.007767   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011865   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011940   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.016837   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
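The hashed link names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention for CA directories: each symlink is named after the hash of the certificate's subject so that CApath lookups can find it. A minimal hand-rolled equivalent of the minikubeCA step, for illustration:

    # compute the subject hash and expose the CA under /etc/ssl/certs/<hash>.0
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"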
	I0914 22:47:43.026372   45407 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:43.030622   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:43.036026   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:43.041394   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:43.046608   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:43.051675   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:43.056621   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
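The `-checkend 86400` probes above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; only the exit status matters to the caller. For illustration, using one of the paths from this log:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires (or is already expired) within 24h"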
	I0914 22:47:43.061552   45407 kubeadm.go:404] StartCluster: {Name:no-preload-344363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:43.061645   45407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:43.061700   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:43.090894   45407 cri.go:89] found id: ""
	I0914 22:47:43.090957   45407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:43.100715   45407 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:43.100732   45407 kubeadm.go:636] restartCluster start
	I0914 22:47:43.100782   45407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:43.109233   45407 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.110217   45407 kubeconfig.go:92] found "no-preload-344363" server: "https://192.168.39.60:8443"
	I0914 22:47:43.112442   45407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:43.120580   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.120619   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.131224   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.131238   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.131292   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.140990   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.641661   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.641753   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.653379   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.142002   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.142077   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.154194   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.641806   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.641931   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.653795   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
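The probe that keeps failing above relies on pgrep semantics: -f matches the pattern against the full command line, -x requires the match to cover that command line exactly, and -n keeps only the newest match, so a non-zero exit simply means no kube-apiserver process exists yet. Roughly:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "kube-apiserver not running yet (exit status 1 drives the retry loop)"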
	I0914 22:47:41.541334   46713 retry.go:31] will retry after 2.553142131s: kubelet not initialised
	I0914 22:47:44.100647   46713 retry.go:31] will retry after 6.538244211s: kubelet not initialised
	I0914 22:47:43.995757   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.490438   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:43.974300   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.474137   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:45.141728   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.141816   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.153503   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:45.641693   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.641775   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.653204   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.141748   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.141838   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.153035   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.641294   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.641386   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.653144   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.141813   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.141915   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.152408   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.641793   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.641872   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.653228   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.141212   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.141304   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.152568   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.641805   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.641881   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.652184   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.141839   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.141909   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.152921   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.642082   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.642160   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.656837   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.991209   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:51.492672   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:48.973567   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.974964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:52.975525   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.141324   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.141399   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.153003   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:50.642032   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.642113   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.653830   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.141403   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.141486   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.152324   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.641932   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.642027   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.653279   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.141928   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.141998   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.152653   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.641151   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.641239   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.652312   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:53.121389   45407 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:53.121422   45407 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:53.121436   45407 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:53.121511   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:53.150615   45407 cri.go:89] found id: ""
	I0914 22:47:53.150681   45407 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:53.164511   45407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:53.173713   45407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:53.173778   45407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183776   45407 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183797   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:53.310974   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.230246   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.409237   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.474183   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
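The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) replay only the parts of a full `kubeadm init` that a restart needs, leaving existing cluster state in place; the complete phase list can be inspected with the staged binary, e.g.:

    sudo /var/lib/minikube/binaries/v1.28.1/kubeadm init phase --help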
	I0914 22:47:54.572433   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:54.572581   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:54.584938   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:50.644922   46713 retry.go:31] will retry after 11.248631638s: kubelet not initialised
	I0914 22:47:53.990630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.990661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.475037   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:57.475941   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.098638   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:55.599218   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.099188   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.598826   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.621701   45407 api_server.go:72] duration metric: took 2.049267478s to wait for apiserver process to appear ...
	I0914 22:47:56.621729   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:56.621749   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622263   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:56.622301   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622682   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:57.123404   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.433050   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:48:00.433082   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:48:00.433096   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.467030   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.467073   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:00.623319   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.633882   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.633912   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.123559   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.128661   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.128691   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.623201   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.629775   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.629804   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:02.123439   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:02.131052   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:48:02.141185   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:48:02.141213   45407 api_server.go:131] duration metric: took 5.519473898s to wait for apiserver health ...
	I0914 22:48:02.141222   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:48:02.141228   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:48:02.143254   45407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:57.992038   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:59.992600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:02.144756   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:48:02.158230   45407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
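
The api_server.go lines above show the shape of minikube's readiness probe: keep requesting https://<apiserver>:8443/healthz, treat a 500 with failing post-start hooks (here rbac/bootstrap-roles) as "not yet", and stop once the endpoint answers 200 "ok". Below is a minimal, hedged sketch of that polling pattern in Go; it is not minikube's implementation, the address is copied from the log only for illustration, and InsecureSkipVerify is an assumption made to keep the sketch short (minikube authenticates with the cluster's client certificates).

    // pollHealthz: sketch of the healthz polling seen above. Probe the URL
    // until it returns 200 "ok" or the deadline expires.
    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for the sketch only; real code should verify the apiserver cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned 200: %s\n", body)
                    return nil
                }
                // e.g. 500 while a post-start hook is still failing, as in the log above.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("apiserver never became healthy")
    }

    func main() {
        if err := pollHealthz("https://192.168.39.60:8443/healthz", 2*time.Minute); err != nil {
            panic(err)
        }
    }
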
	I0914 22:48:02.182382   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:48:02.204733   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:48:02.204786   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:48:02.204801   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:48:02.204817   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:48:02.204834   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:48:02.204847   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:48:02.204859   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:48:02.204876   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:48:02.204887   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:48:02.204900   45407 system_pods.go:74] duration metric: took 22.491699ms to wait for pod list to return data ...
	I0914 22:48:02.204913   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:48:02.208661   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:48:02.208692   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:48:02.208706   45407 node_conditions.go:105] duration metric: took 3.7844ms to run NodePressure ...
	I0914 22:48:02.208731   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:48:02.454257   45407 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458848   45407 kubeadm.go:787] kubelet initialised
	I0914 22:48:02.458868   45407 kubeadm.go:788] duration metric: took 4.585034ms waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458874   45407 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:02.464634   45407 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.471350   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471371   45407 pod_ready.go:81] duration metric: took 6.714087ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.471379   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471387   45407 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.476977   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.476998   45407 pod_ready.go:81] duration metric: took 5.604627ms waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.477009   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.477019   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.483218   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483236   45407 pod_ready.go:81] duration metric: took 6.211697ms waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.483244   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483256   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.589184   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589217   45407 pod_ready.go:81] duration metric: took 105.950074ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.589227   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589236   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.987051   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987081   45407 pod_ready.go:81] duration metric: took 397.836385ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.987094   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987103   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.392835   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392865   45407 pod_ready.go:81] duration metric: took 405.754351ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.392876   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392886   45407 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.786615   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786641   45407 pod_ready.go:81] duration metric: took 393.746366ms waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.786652   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786660   45407 pod_ready.go:38] duration metric: took 1.327778716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
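
The pod_ready.go entries above apply the same idea to pods: poll each system-critical pod until its Ready condition reports "True", skipping pods whose node is itself not Ready. The following is a rough, hedged equivalent using client-go, not the test harness's own code; the kubeconfig path, namespace, pod name, and poll interval are placeholders, and the skip-on-node-not-Ready branch from the log is omitted for brevity.

    // waitPodReady: sketch of the "waiting for pod ... to be Ready" loop above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil // pod reports Ready:"True"
                    }
                }
            }
            time.Sleep(2 * time.Second) // re-check periodically, as the log does
        }
        return fmt.Errorf("pod %s/%s never became Ready", ns, name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(cs, "kube-system", "coredns-5dd5756b68-rntdg", 4*time.Minute); err != nil {
            panic(err)
        }
    }
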
	I0914 22:48:03.786676   45407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:48:03.798081   45407 ops.go:34] apiserver oom_adj: -16
	I0914 22:48:03.798101   45407 kubeadm.go:640] restartCluster took 20.697363165s
	I0914 22:48:03.798107   45407 kubeadm.go:406] StartCluster complete in 20.736562339s
	I0914 22:48:03.798121   45407 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.798193   45407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:48:03.799954   45407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.800200   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:48:03.800299   45407 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:48:03.800368   45407 addons.go:69] Setting storage-provisioner=true in profile "no-preload-344363"
	I0914 22:48:03.800449   45407 addons.go:231] Setting addon storage-provisioner=true in "no-preload-344363"
	W0914 22:48:03.800462   45407 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:48:03.800511   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800394   45407 addons.go:69] Setting metrics-server=true in profile "no-preload-344363"
	I0914 22:48:03.800543   45407 addons.go:231] Setting addon metrics-server=true in "no-preload-344363"
	W0914 22:48:03.800558   45407 addons.go:240] addon metrics-server should already be in state true
	I0914 22:48:03.800590   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800388   45407 addons.go:69] Setting default-storageclass=true in profile "no-preload-344363"
	I0914 22:48:03.800633   45407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-344363"
	I0914 22:48:03.800411   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:48:03.800906   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800909   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800944   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.801011   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.801054   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.800968   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.804911   45407 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-344363" context rescaled to 1 replicas
	I0914 22:48:03.804946   45407 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:48:03.807503   45407 out.go:177] * Verifying Kubernetes components...
	I0914 22:47:59.973913   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:01.974625   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:03.808768   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:48:03.816774   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0914 22:48:03.816773   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0914 22:48:03.817265   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817518   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817791   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.817821   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818011   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.818032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818223   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818407   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818431   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.818976   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.819027   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.829592   45407 addons.go:231] Setting addon default-storageclass=true in "no-preload-344363"
	W0914 22:48:03.829614   45407 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:48:03.829641   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.830013   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.830047   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.835514   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0914 22:48:03.835935   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.836447   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.836473   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.836841   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.837011   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.838909   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.843677   45407 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:48:03.845231   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:48:03.845246   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:48:03.845261   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.844291   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0914 22:48:03.845685   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.846224   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.846242   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.846572   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.847073   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.847103   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.847332   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0914 22:48:03.848400   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.848666   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849160   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.849182   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.849263   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.849283   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849314   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.849461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.849570   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.849635   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.849682   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.850555   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.850585   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.863035   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0914 22:48:03.863559   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864010   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.864204   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0914 22:48:03.864478   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.864526   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864752   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.864936   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864955   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.865261   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.865489   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.866474   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.868300   45407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:48:03.867504   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.869841   45407 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:03.869855   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:48:03.869874   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.870067   45407 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:03.870078   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:48:03.870091   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.873462   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.873859   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.873882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874026   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874114   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.874287   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.874397   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.874903   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874949   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.874980   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.875135   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.875301   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.875486   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.956934   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:48:03.956956   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:48:03.973872   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:48:03.973896   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:48:04.002028   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.002051   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:48:04.018279   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:04.037990   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:04.047125   45407 node_ready.go:35] waiting up to 6m0s for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:04.047292   45407 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:48:04.086299   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.991926   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.991952   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992225   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992292   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992324   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992342   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992364   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992614   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992634   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992649   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992657   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992665   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992914   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992933   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:01.898769   46713 retry.go:31] will retry after 9.475485234s: kubelet not initialised
	I0914 22:48:05.528027   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.490009157s)
	I0914 22:48:05.528078   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528087   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528435   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528457   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528470   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528436   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.528481   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528802   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528824   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528829   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.600274   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.51392997s)
	I0914 22:48:05.600338   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600351   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.600645   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.600670   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.600682   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600695   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.602502   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.602513   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.602524   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.602546   45407 addons.go:467] Verifying addon metrics-server=true in "no-preload-344363"
	I0914 22:48:05.604330   45407 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 22:48:02.491577   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.995014   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.474529   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:06.474964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:05.605648   45407 addons.go:502] enable addons completed in 1.805353931s: enabled=[default-storageclass storage-provisioner metrics-server]
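
The addon phase above copies each manifest onto the node (the ssh_runner scp lines) and then applies it with the node-local kubectl binary against /var/lib/minikube/kubeconfig. Below is a stripped-down sketch of that remote-apply step using golang.org/x/crypto/ssh; it is illustrative only, the address and user echo the sshutil.go line above, the key path is a placeholder, and host-key checking is disabled purely to keep the example short.

    // applyAddon: sketch of the ssh_runner pattern above - open an SSH session
    // to the node and run kubectl there against the node's kubeconfig.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func applyAddon(addr, user, keyPath, manifest string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
        })
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.28.1/kubectl apply -f " + manifest
        out, err := session.CombinedOutput(cmd)
        fmt.Print(string(out))
        return err
    }

    func main() {
        // Values echo the log above; the key path is a placeholder.
        err := applyAddon("192.168.39.60:22", "docker",
            "/path/to/machines/no-preload-344363/id_rsa",
            "/etc/kubernetes/addons/storage-provisioner.yaml")
        if err != nil {
            panic(err)
        }
    }
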
	I0914 22:48:06.198114   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:08.199023   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:07.490770   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:09.991693   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:08.974469   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:11.474711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:10.698198   45407 node_ready.go:49] node "no-preload-344363" has status "Ready":"True"
	I0914 22:48:10.698218   45407 node_ready.go:38] duration metric: took 6.651066752s waiting for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:10.698227   45407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:10.704694   45407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710103   45407 pod_ready.go:92] pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:10.710119   45407 pod_ready.go:81] duration metric: took 5.400404ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710128   45407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.747445   45407 pod_ready.go:102] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.229927   45407 pod_ready.go:92] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:13.229953   45407 pod_ready.go:81] duration metric: took 2.519818297s waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:13.229966   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747126   45407 pod_ready.go:92] pod "kube-apiserver-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.747147   45407 pod_ready.go:81] duration metric: took 1.51717338s waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747157   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752397   45407 pod_ready.go:92] pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.752413   45407 pod_ready.go:81] duration metric: took 5.250049ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752420   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.380752   46713 kubeadm.go:787] kubelet initialised
	I0914 22:48:11.380783   46713 kubeadm.go:788] duration metric: took 37.789831498s waiting for restarted kubelet to initialise ...
	I0914 22:48:11.380793   46713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:11.386189   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392948   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.392970   46713 pod_ready.go:81] duration metric: took 6.75113ms waiting for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392981   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398606   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.398627   46713 pod_ready.go:81] duration metric: took 5.638835ms waiting for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398639   46713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404145   46713 pod_ready.go:92] pod "etcd-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.404174   46713 pod_ready.go:81] duration metric: took 5.527173ms waiting for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404187   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409428   46713 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.409448   46713 pod_ready.go:81] duration metric: took 5.252278ms waiting for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409461   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779225   46713 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.779252   46713 pod_ready.go:81] duration metric: took 369.782336ms waiting for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779267   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179256   46713 pod_ready.go:92] pod "kube-proxy-l4qz4" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.179277   46713 pod_ready.go:81] duration metric: took 400.003039ms waiting for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179286   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578889   46713 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.578921   46713 pod_ready.go:81] duration metric: took 399.627203ms waiting for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578935   46713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:12.491274   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:14.991146   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.991799   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.974725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.473917   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.474722   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:15.099588   45407 pod_ready.go:92] pod "kube-proxy-zzkbp" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.099612   45407 pod_ready.go:81] duration metric: took 347.18498ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.099623   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498642   45407 pod_ready.go:92] pod "kube-scheduler-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.498664   45407 pod_ready.go:81] duration metric: took 399.034277ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498678   45407 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:17.806138   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.887157   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:19.390361   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.991911   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.993133   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.474578   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.305450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:22.305521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:24.306131   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:21.885143   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.886722   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.490126   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.991185   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.974547   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.473850   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.805651   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.306125   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.384992   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.385266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.385877   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:27.991827   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.991995   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.475603   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.974568   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:31.806483   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.306121   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.886341   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.385506   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.488948   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.490950   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.989621   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.474815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.973407   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.806806   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.806988   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.886043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.386865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.991151   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:41.491384   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:39.974109   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.473010   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.808362   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.305126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.886094   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.386710   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.991121   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.992500   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:44.475120   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:46.973837   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.305212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.305740   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.806334   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.886380   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.887578   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:48.490416   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:50.990196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.474209   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.474657   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.808853   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.305742   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.888488   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.385591   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:52.990333   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.991549   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:53.974301   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:55.976250   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.474372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.807759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.304597   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.885164   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.885809   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:57.491267   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.492043   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.991231   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:00.974064   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:02.975136   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.808275   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.385492   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.385865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:05.386266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.992513   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.490253   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:04.975537   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.473413   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.306066   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.805711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.886495   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.386100   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.995545   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.490960   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:09.476367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.974480   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.807870   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.306759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:12.386166   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.990090   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.489864   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.975102   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.474761   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.475314   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:15.809041   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.305700   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:17.385490   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:19.386201   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.490727   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.493813   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.973383   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.973978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.306906   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.805781   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.806417   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:21.387171   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:23.394663   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.989981   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.998602   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.975048   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.473804   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.805993   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:25.886256   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:28.385307   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:30.386473   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.490860   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.991665   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.992373   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.475815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.973092   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.305648   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.806797   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.886577   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.386203   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.490086   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:36.490465   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:33.973662   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.974041   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.473275   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.306848   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.806295   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.388154   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.886447   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.490850   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.989734   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.473543   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.473711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:41.807197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.305572   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.385788   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.386844   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.995794   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:45.490630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.474251   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.974425   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.306070   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.805530   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.886095   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.888504   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:47.491269   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.990921   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.474354   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.973552   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:50.806526   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.807021   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.385411   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.385825   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.490166   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:54.991982   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.974372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:56.473350   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.305863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.306450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.308315   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.886560   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.886950   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.386043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.490604   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.490811   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.993715   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:58.973152   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.975078   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.474589   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.806409   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.806552   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:02.387458   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.886066   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.490551   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:06.490632   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.974290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.974714   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.810256   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.305443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.386252   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:09.887808   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.490994   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.990417   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.474207   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.973759   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.305662   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.807626   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.385387   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.386055   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.991196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.489856   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.974362   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.474890   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.305348   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.306521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.306661   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:16.386682   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:18.386805   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.491969   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.990884   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.991904   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.476052   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.973290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.806863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.810113   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:20.886118   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.388653   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:24.490861   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.991437   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.474556   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.307894   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.809126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:25.885409   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:27.886080   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.386151   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:29.489358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.491041   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.973725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.975342   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.474590   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.306171   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.307126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:32.386190   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:34.886414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.491383   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.492155   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.974978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:38.473506   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.307221   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.806174   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.386235   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.886579   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.990447   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.991649   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.474117   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.973778   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.308130   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.806411   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.807765   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.385199   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.387102   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.491019   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.993076   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.974689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.473863   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.305509   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.305825   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:46.885280   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.385189   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.491661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.989457   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.991512   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.973709   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.976112   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.306459   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.805441   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.386498   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.887424   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.492074   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.989668   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.473073   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.473689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.474597   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:55.806711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.305434   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.386640   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.885298   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.995348   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:01.491262   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.974371   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.474367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.305803   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.806120   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:04.807184   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.886357   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.887274   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:05.386976   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.708637   45954 pod_ready.go:81] duration metric: took 4m0.000105295s waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:03.708672   45954 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:03.708681   45954 pod_ready.go:38] duration metric: took 4m6.567418041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:03.708699   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:51:03.708739   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:03.708804   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:03.759664   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:03.759688   45954 cri.go:89] found id: ""
	I0914 22:51:03.759697   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:03.759753   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.764736   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:03.764789   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:03.800251   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:03.800280   45954 cri.go:89] found id: ""
	I0914 22:51:03.800290   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:03.800341   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.804761   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:03.804818   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:03.847136   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:03.847162   45954 cri.go:89] found id: ""
	I0914 22:51:03.847172   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:03.847215   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.851253   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:03.851325   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:03.882629   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:03.882654   45954 cri.go:89] found id: ""
	I0914 22:51:03.882664   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:03.882713   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.887586   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:03.887642   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:03.916702   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:03.916723   45954 cri.go:89] found id: ""
	I0914 22:51:03.916730   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:03.916773   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.921172   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:03.921232   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:03.950593   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:03.950618   45954 cri.go:89] found id: ""
	I0914 22:51:03.950628   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:03.950689   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.954303   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:03.954366   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:03.982565   45954 cri.go:89] found id: ""
	I0914 22:51:03.982588   45954 logs.go:284] 0 containers: []
	W0914 22:51:03.982597   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:03.982604   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:03.982662   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:04.011932   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.011957   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:04.011964   45954 cri.go:89] found id: ""
	I0914 22:51:04.011972   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:04.012026   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.016091   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.019830   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:04.019852   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:04.061469   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:04.061494   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:04.092823   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:04.092846   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:04.156150   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:04.156190   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:04.169879   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:04.169920   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:04.226165   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:04.226198   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.255658   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:04.255692   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:04.299368   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:04.299401   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:04.440433   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:04.440467   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:04.477396   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:04.477425   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:04.513399   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:04.513431   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:05.016889   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:05.016925   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:05.067712   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:05.067749   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:05.973423   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.973637   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.307754   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.805419   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.389465   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.885150   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.597529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:51:07.614053   45954 api_server.go:72] duration metric: took 4m15.435815174s to wait for apiserver process to appear ...
	I0914 22:51:07.614076   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:51:07.614106   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:07.614155   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:07.643309   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:07.643333   45954 cri.go:89] found id: ""
	I0914 22:51:07.643342   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:07.643411   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.647434   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:07.647511   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:07.676943   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:07.676959   45954 cri.go:89] found id: ""
	I0914 22:51:07.676966   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:07.677006   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.681053   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:07.681101   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:07.714710   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:07.714736   45954 cri.go:89] found id: ""
	I0914 22:51:07.714745   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:07.714807   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.718900   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:07.718966   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:07.754786   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:07.754808   45954 cri.go:89] found id: ""
	I0914 22:51:07.754815   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:07.754867   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.759623   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:07.759693   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:07.794366   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:07.794389   45954 cri.go:89] found id: ""
	I0914 22:51:07.794398   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:07.794457   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.798717   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:07.798777   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:07.831131   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:07.831158   45954 cri.go:89] found id: ""
	I0914 22:51:07.831167   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:07.831227   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.835696   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:07.835762   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:07.865802   45954 cri.go:89] found id: ""
	I0914 22:51:07.865831   45954 logs.go:284] 0 containers: []
	W0914 22:51:07.865841   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:07.865849   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:07.865905   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:07.895025   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:07.895049   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:07.895056   45954 cri.go:89] found id: ""
	I0914 22:51:07.895064   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:07.895118   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.899230   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.903731   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:07.903751   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:08.033922   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:08.033952   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:08.068784   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:08.068812   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:08.120395   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:08.120428   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:08.133740   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:08.133763   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:08.173288   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:08.173320   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:08.203964   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:08.203988   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:08.732327   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:08.732367   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:08.784110   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:08.784150   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:08.819179   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:08.819210   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:08.866612   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:08.866644   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:08.900892   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:08.900939   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:08.950563   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:08.950593   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:11.505428   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:51:11.511228   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:51:11.512855   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:51:11.512881   45954 api_server.go:131] duration metric: took 3.898798182s to wait for apiserver health ...
	I0914 22:51:11.512891   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:51:11.512911   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:11.512954   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:11.544538   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:11.544563   45954 cri.go:89] found id: ""
	I0914 22:51:11.544573   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:11.544629   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.548878   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:11.548946   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:11.578439   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:11.578464   45954 cri.go:89] found id: ""
	I0914 22:51:11.578473   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:11.578531   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.582515   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:11.582576   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:11.611837   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:11.611857   45954 cri.go:89] found id: ""
	I0914 22:51:11.611863   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:11.611917   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.615685   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:11.615744   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:11.645850   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:11.645869   45954 cri.go:89] found id: ""
	I0914 22:51:11.645876   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:11.645914   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.649995   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:11.650048   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:11.683515   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:11.683541   45954 cri.go:89] found id: ""
	I0914 22:51:11.683550   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:11.683604   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.687715   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:11.687806   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:11.721411   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.721428   45954 cri.go:89] found id: ""
	I0914 22:51:11.721434   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:11.721477   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.725801   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:11.725859   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:11.760391   45954 cri.go:89] found id: ""
	I0914 22:51:11.760417   45954 logs.go:284] 0 containers: []
	W0914 22:51:11.760427   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:11.760437   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:11.760498   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:11.792140   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.792162   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:11.792168   45954 cri.go:89] found id: ""
	I0914 22:51:11.792175   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:11.792234   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.796600   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.800888   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:11.800912   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:11.863075   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:11.863106   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:11.877744   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:11.877775   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.930381   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:11.930418   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.961471   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:11.961497   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:12.005391   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:12.005417   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:12.034742   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:12.034771   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:12.064672   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:12.064702   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:12.095801   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:12.095834   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:12.124224   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:12.124260   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:09.974433   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.975389   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.806380   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.807443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:12.657331   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:12.657375   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:12.718197   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:12.718227   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:12.845353   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:12.845381   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:15.395502   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:51:15.395524   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.395529   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.395534   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.395540   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.395544   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.395548   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.395554   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.395559   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.395565   45954 system_pods.go:74] duration metric: took 3.882669085s to wait for pod list to return data ...
	I0914 22:51:15.395572   45954 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:51:15.398128   45954 default_sa.go:45] found service account: "default"
	I0914 22:51:15.398148   45954 default_sa.go:55] duration metric: took 2.571314ms for default service account to be created ...
	I0914 22:51:15.398155   45954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:51:15.407495   45954 system_pods.go:86] 8 kube-system pods found
	I0914 22:51:15.407517   45954 system_pods.go:89] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.407522   45954 system_pods.go:89] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.407527   45954 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.407532   45954 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.407535   45954 system_pods.go:89] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.407540   45954 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.407549   45954 system_pods.go:89] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.407558   45954 system_pods.go:89] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.407576   45954 system_pods.go:126] duration metric: took 9.409452ms to wait for k8s-apps to be running ...
	I0914 22:51:15.407587   45954 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:51:15.407633   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:15.424728   45954 system_svc.go:56] duration metric: took 17.122868ms WaitForService to wait for kubelet.
	I0914 22:51:15.424754   45954 kubeadm.go:581] duration metric: took 4m23.246518879s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:51:15.424794   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:51:15.428492   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:51:15.428520   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:51:15.428534   45954 node_conditions.go:105] duration metric: took 3.733977ms to run NodePressure ...
	I0914 22:51:15.428550   45954 start.go:228] waiting for startup goroutines ...
	I0914 22:51:15.428563   45954 start.go:233] waiting for cluster config update ...
	I0914 22:51:15.428576   45954 start.go:242] writing updated cluster config ...
	I0914 22:51:15.428887   45954 ssh_runner.go:195] Run: rm -f paused
	I0914 22:51:15.479711   45954 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:51:15.482387   45954 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799144" cluster and "default" namespace by default
	I0914 22:51:11.885968   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.887391   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:14.474188   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.974056   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.306146   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.806037   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.386306   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.386406   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:19.474446   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:21.474860   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.375841   46412 pod_ready.go:81] duration metric: took 4m0.000552226s waiting for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:22.375872   46412 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:22.375890   46412 pod_ready.go:38] duration metric: took 4m12.961510371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:22.375915   46412 kubeadm.go:640] restartCluster took 4m33.075347594s
	W0914 22:51:22.375983   46412 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:51:22.376022   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:51:20.806249   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.807141   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:24.809235   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:20.888098   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:23.386482   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:25.386542   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.305114   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:29.306240   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.886695   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:30.385740   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:31.306508   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:33.306655   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:32.886111   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.384925   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.805992   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:38.307801   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:37.385344   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:39.888303   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:40.806212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:43.305815   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:42.388414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:44.388718   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:45.306197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:47.806983   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:49.807150   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:46.885737   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:48.885794   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.115476   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.73941793s)
	I0914 22:51:53.115549   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:53.128821   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:51:53.137267   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:51:53.145533   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:51:53.145569   46412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 22:51:53.202279   46412 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 22:51:53.202501   46412 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:51:53.353512   46412 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:51:53.353674   46412 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:51:53.353816   46412 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:51:53.513428   46412 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:51:53.515450   46412 out.go:204]   - Generating certificates and keys ...
	I0914 22:51:53.515574   46412 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:51:53.515672   46412 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:51:53.515785   46412 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:51:53.515896   46412 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:51:53.516234   46412 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:51:53.516841   46412 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:51:53.517488   46412 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:51:53.517974   46412 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:51:53.518563   46412 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:51:53.519109   46412 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:51:53.519728   46412 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:51:53.519809   46412 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:51:53.641517   46412 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:51:53.842920   46412 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:51:53.982500   46412 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:51:54.065181   46412 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:51:54.065678   46412 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:51:54.071437   46412 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:51:52.305643   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.305996   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:51.386246   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.386956   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.073206   46412 out.go:204]   - Booting up control plane ...
	I0914 22:51:54.073363   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:51:54.073470   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:51:54.073554   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:51:54.091178   46412 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:51:54.091289   46412 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:51:54.091371   46412 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:51:54.221867   46412 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:51:56.306473   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:58.306953   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:55.886624   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:57.887222   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:00.385756   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.225144   46412 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002879 seconds
	I0914 22:52:02.225314   46412 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:02.244705   46412 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:02.778808   46412 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:02.779047   46412 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-588699 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:52:03.296381   46412 kubeadm.go:322] [bootstrap-token] Using token: x2l9oo.p0a5g5jx49srzji3
	I0914 22:52:03.297976   46412 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:03.298091   46412 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:03.308475   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:52:03.319954   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:03.325968   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:03.330375   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:03.334686   46412 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:03.353185   46412 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:52:03.622326   46412 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:03.721359   46412 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:03.721385   46412 kubeadm.go:322] 
	I0914 22:52:03.721472   46412 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:03.721486   46412 kubeadm.go:322] 
	I0914 22:52:03.721589   46412 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:03.721602   46412 kubeadm.go:322] 
	I0914 22:52:03.721623   46412 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:03.721678   46412 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:03.721727   46412 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:03.721764   46412 kubeadm.go:322] 
	I0914 22:52:03.721856   46412 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 22:52:03.721867   46412 kubeadm.go:322] 
	I0914 22:52:03.721945   46412 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:52:03.721954   46412 kubeadm.go:322] 
	I0914 22:52:03.722029   46412 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:03.722137   46412 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:03.722240   46412 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:03.722250   46412 kubeadm.go:322] 
	I0914 22:52:03.722367   46412 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:52:03.722468   46412 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:03.722479   46412 kubeadm.go:322] 
	I0914 22:52:03.722583   46412 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.722694   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:03.722719   46412 kubeadm.go:322] 	--control-plane 
	I0914 22:52:03.722752   46412 kubeadm.go:322] 
	I0914 22:52:03.722887   46412 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:03.722912   46412 kubeadm.go:322] 
	I0914 22:52:03.723031   46412 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.723170   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:03.724837   46412 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:03.724867   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:52:03.724879   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:03.726645   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:03.728115   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:03.741055   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:52:03.811746   46412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:03.811823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=embed-certs-588699 minikube.k8s.io/updated_at=2023_09_14T22_52_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:03.811827   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:00.805633   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.805831   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.807503   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.885499   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.886940   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.097721   46412 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:04.097763   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.186240   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.773886   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.273494   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.773993   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.274084   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.773309   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.273666   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.773916   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.274226   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.774073   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.807538   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.306062   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:06.886980   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.385212   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.274041   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:09.773409   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.274272   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.774321   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.274268   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.774250   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.273823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.774000   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.273596   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.774284   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.806015   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:14.308997   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:11.386087   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:12.580003   46713 pod_ready.go:81] duration metric: took 4m0.001053291s waiting for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:12.580035   46713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:12.580062   46713 pod_ready.go:38] duration metric: took 4m1.199260232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:12.580089   46713 kubeadm.go:640] restartCluster took 4m59.591702038s
	W0914 22:52:12.580145   46713 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:52:12.580169   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:52:14.274174   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:14.773472   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.273376   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.773286   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.273920   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.773334   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.926033   46412 kubeadm.go:1081] duration metric: took 13.114277677s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:16.926076   46412 kubeadm.go:406] StartCluster complete in 5m27.664586228s
	I0914 22:52:16.926099   46412 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.926229   46412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:16.928891   46412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.929177   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:16.929313   46412 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:16.929393   46412 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-588699"
	I0914 22:52:16.929408   46412 addons.go:69] Setting default-storageclass=true in profile "embed-certs-588699"
	I0914 22:52:16.929423   46412 addons.go:69] Setting metrics-server=true in profile "embed-certs-588699"
	I0914 22:52:16.929435   46412 addons.go:231] Setting addon metrics-server=true in "embed-certs-588699"
	W0914 22:52:16.929446   46412 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:16.929446   46412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-588699"
	I0914 22:52:16.929475   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:52:16.929508   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929418   46412 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-588699"
	W0914 22:52:16.929533   46412 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:16.929574   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929907   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929938   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929939   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929963   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929968   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929995   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.948975   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I0914 22:52:16.948990   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0914 22:52:16.948977   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0914 22:52:16.949953   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950006   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.949957   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950601   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950607   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950620   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950626   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950632   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950647   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.951159   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951191   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951410   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951808   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951829   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.951896   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951906   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.951911   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.961182   46412 addons.go:231] Setting addon default-storageclass=true in "embed-certs-588699"
	W0914 22:52:16.961207   46412 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:16.961236   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.961615   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.961637   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.976517   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0914 22:52:16.976730   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0914 22:52:16.977005   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977161   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977448   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977466   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977564   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977589   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977781   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977913   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977966   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.978108   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.980084   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.980429   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.982113   46412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:16.983227   46412 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:16.984383   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:16.984394   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:16.984407   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.983307   46412 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:16.984439   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:16.984455   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.987850   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0914 22:52:16.987989   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988270   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.988771   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.988788   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.988849   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.988867   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988894   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.989058   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.989528   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.989748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.990151   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.990172   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.990441   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:16.990597   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.990766   46412 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-588699" context rescaled to 1 replicas
	I0914 22:52:16.990794   46412 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:16.992351   46412 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:16.990986   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.991129   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.994003   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:16.994015   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.994097   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.994300   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.994607   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.007652   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0914 22:52:17.008127   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:17.008676   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:17.008699   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:17.009115   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:17.009301   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:17.010905   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:17.011169   46412 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.011183   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:17.011201   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:17.014427   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.014837   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:17.014865   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.015132   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:17.015299   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:17.015435   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:17.015585   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.124720   46412 node_ready.go:35] waiting up to 6m0s for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.124831   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:17.128186   46412 node_ready.go:49] node "embed-certs-588699" has status "Ready":"True"
	I0914 22:52:17.128211   46412 node_ready.go:38] duration metric: took 3.459847ms waiting for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.128221   46412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.133021   46412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138574   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.138594   46412 pod_ready.go:81] duration metric: took 5.550933ms waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138605   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151548   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.151569   46412 pod_ready.go:81] duration metric: took 12.956129ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151581   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169368   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.169393   46412 pod_ready.go:81] duration metric: took 17.803681ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169406   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.180202   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:17.180227   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:17.184052   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:17.227381   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:17.227411   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:17.233773   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.293762   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:17.293788   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:17.328911   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.328934   46412 pod_ready.go:81] duration metric: took 159.520585ms waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.328942   46412 pod_ready.go:38] duration metric: took 200.709608ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.328958   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:17.329008   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:17.379085   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:18.947663   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.822786746s)
	I0914 22:52:18.947705   46412 start.go:917] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:19.171809   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937996858s)
	I0914 22:52:19.171861   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171872   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.98779094s)
	I0914 22:52:19.171908   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171927   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171878   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171875   46412 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.842825442s)
	I0914 22:52:19.172234   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172277   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172292   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172289   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172307   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172322   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172352   46412 api_server.go:72] duration metric: took 2.181532709s to wait for apiserver process to appear ...
	I0914 22:52:19.172322   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172369   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.172377   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172387   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172396   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172410   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:52:19.172625   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172643   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172657   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172667   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172688   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172715   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172723   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172955   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172969   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.173012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.205041   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:52:19.209533   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:19.209561   46412 api_server.go:131] duration metric: took 37.185195ms to wait for apiserver health ...
	I0914 22:52:19.209573   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:19.225866   46412 system_pods.go:59] 7 kube-system pods found
	I0914 22:52:19.225893   46412 system_pods.go:61] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.225900   46412 system_pods.go:61] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.225908   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.225915   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.225921   46412 system_pods.go:61] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.225928   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.225934   46412 system_pods.go:61] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending
	I0914 22:52:19.225947   46412 system_pods.go:74] duration metric: took 16.366454ms to wait for pod list to return data ...
	I0914 22:52:19.225958   46412 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:19.232176   46412 default_sa.go:45] found service account: "default"
	I0914 22:52:19.232202   46412 default_sa.go:55] duration metric: took 6.234795ms for default service account to be created ...
	I0914 22:52:19.232221   46412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:19.238383   46412 system_pods.go:86] 7 kube-system pods found
	I0914 22:52:19.238415   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.238426   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.238433   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.238442   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.238448   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.238454   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.238463   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.238486   46412 retry.go:31] will retry after 271.864835ms: missing components: kube-dns
	I0914 22:52:19.431792   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.052667289s)
	I0914 22:52:19.431858   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.431875   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432217   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432254   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432265   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432277   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.432291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432561   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432615   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432626   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432637   46412 addons.go:467] Verifying addon metrics-server=true in "embed-certs-588699"
	I0914 22:52:19.434406   46412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:15.499654   45407 pod_ready.go:81] duration metric: took 4m0.00095032s waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:15.499683   45407 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:15.499692   45407 pod_ready.go:38] duration metric: took 4m4.80145633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:15.499709   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:15.499741   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:15.499821   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:15.551531   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:15.551573   45407 cri.go:89] found id: ""
	I0914 22:52:15.551584   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:15.551638   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.555602   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:15.555649   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:15.583476   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:15.583497   45407 cri.go:89] found id: ""
	I0914 22:52:15.583504   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:15.583541   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.587434   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:15.587499   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:15.614791   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:15.614813   45407 cri.go:89] found id: ""
	I0914 22:52:15.614821   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:15.614865   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.618758   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:15.618813   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:15.651772   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:15.651798   45407 cri.go:89] found id: ""
	I0914 22:52:15.651807   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:15.651862   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.656464   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:15.656533   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:15.701258   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:15.701289   45407 cri.go:89] found id: ""
	I0914 22:52:15.701299   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:15.701359   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.705980   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:15.706049   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:15.741616   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:15.741640   45407 cri.go:89] found id: ""
	I0914 22:52:15.741647   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:15.741702   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.745863   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:15.745913   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:15.779362   45407 cri.go:89] found id: ""
	I0914 22:52:15.779385   45407 logs.go:284] 0 containers: []
	W0914 22:52:15.779395   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:15.779403   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:15.779462   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:15.815662   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:15.815691   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.815698   45407 cri.go:89] found id: ""
	I0914 22:52:15.815707   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:15.815781   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.820879   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.826312   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:15.826338   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.864143   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:15.864175   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:16.401646   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:16.401689   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:16.442964   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:16.443000   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:16.612411   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:16.612444   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:16.664620   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:16.664652   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:16.702405   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:16.702432   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:16.738583   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:16.738615   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:16.752752   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:16.752788   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:16.793883   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:16.793924   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:16.825504   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:16.825531   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:16.879008   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:16.879046   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:16.910902   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:16.910941   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.477726   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:19.494214   45407 api_server.go:72] duration metric: took 4m15.689238s to wait for apiserver process to appear ...
	I0914 22:52:19.494240   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.494281   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:19.494341   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:19.534990   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:19.535014   45407 cri.go:89] found id: ""
	I0914 22:52:19.535023   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:19.535081   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.540782   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:19.540850   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:19.570364   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:19.570390   45407 cri.go:89] found id: ""
	I0914 22:52:19.570399   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:19.570465   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.575964   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:19.576027   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:19.608023   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:19.608047   45407 cri.go:89] found id: ""
	I0914 22:52:19.608056   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:19.608098   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.612290   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:19.612343   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:19.644658   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:19.644682   45407 cri.go:89] found id: ""
	I0914 22:52:19.644692   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:19.644743   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.651016   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:19.651092   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:19.693035   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:19.693059   45407 cri.go:89] found id: ""
	I0914 22:52:19.693068   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:19.693122   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.697798   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:19.697864   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:19.733805   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.733828   45407 cri.go:89] found id: ""
	I0914 22:52:19.733837   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:19.733890   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.737902   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:19.737976   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:19.765139   45407 cri.go:89] found id: ""
	I0914 22:52:19.765169   45407 logs.go:284] 0 containers: []
	W0914 22:52:19.765180   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:19.765188   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:19.765248   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:19.793734   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.793756   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:19.793761   45407 cri.go:89] found id: ""
	I0914 22:52:19.793767   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:19.793807   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.797559   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.801472   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:19.801492   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:19.937110   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:19.937138   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.987564   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:19.987599   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.436138   46412 addons.go:502] enable addons completed in 2.506819532s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:19.523044   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.523077   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.523089   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.523096   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.523103   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.523109   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.523115   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.523124   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.523137   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.523164   46412 retry.go:31] will retry after 369.359833ms: missing components: kube-dns
	I0914 22:52:19.900488   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.900529   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.900541   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.900550   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.900558   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.900564   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.900571   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.900587   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.900608   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.900630   46412 retry.go:31] will retry after 329.450987ms: missing components: kube-dns
	I0914 22:52:20.245124   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.245152   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.245160   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.245166   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.245171   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.245177   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.245185   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.245194   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.245204   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.245225   46412 retry.go:31] will retry after 392.738624ms: missing components: kube-dns
	I0914 22:52:20.645671   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.645706   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.645716   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.645725   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.645737   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.645747   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.645756   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.645770   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.645783   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.645803   46412 retry.go:31] will retry after 463.608084ms: missing components: kube-dns
	I0914 22:52:21.118889   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:21.118920   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Running
	I0914 22:52:21.118926   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:21.118931   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:21.118937   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:21.118941   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:21.118946   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:21.118954   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:21.118963   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:21.118971   46412 system_pods.go:126] duration metric: took 1.886741356s to wait for k8s-apps to be running ...
	I0914 22:52:21.118984   46412 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:21.119025   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:21.134331   46412 system_svc.go:56] duration metric: took 15.34035ms WaitForService to wait for kubelet.
	I0914 22:52:21.134358   46412 kubeadm.go:581] duration metric: took 4.143541631s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:21.134381   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:21.137182   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:21.137207   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:21.137230   46412 node_conditions.go:105] duration metric: took 2.834168ms to run NodePressure ...
	I0914 22:52:21.137243   46412 start.go:228] waiting for startup goroutines ...
	I0914 22:52:21.137252   46412 start.go:233] waiting for cluster config update ...
	I0914 22:52:21.137272   46412 start.go:242] writing updated cluster config ...
	I0914 22:52:21.137621   46412 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:21.184252   46412 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:21.186251   46412 out.go:177] * Done! kubectl is now configured to use "embed-certs-588699" cluster and "default" namespace by default
	I0914 22:52:20.022483   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:20.022512   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:20.062375   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:20.062403   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:20.099744   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:20.099776   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:20.129490   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:20.129515   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:20.165896   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:20.165922   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:20.692724   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:20.692758   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:20.761038   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:20.761086   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:20.777087   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:20.777114   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:20.808980   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:20.809020   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:20.845904   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:20.845942   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.393816   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:52:23.399946   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:52:23.401251   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:23.401271   45407 api_server.go:131] duration metric: took 3.907024801s to wait for apiserver health ...
	I0914 22:52:23.401279   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:23.401303   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:23.401346   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:23.433871   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.433895   45407 cri.go:89] found id: ""
	I0914 22:52:23.433905   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:23.433962   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.438254   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:23.438317   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:23.468532   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:23.468555   45407 cri.go:89] found id: ""
	I0914 22:52:23.468564   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:23.468626   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.473599   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:23.473658   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:23.509951   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:23.509976   45407 cri.go:89] found id: ""
	I0914 22:52:23.509986   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:23.510041   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.516637   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:23.516722   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:23.549562   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.549587   45407 cri.go:89] found id: ""
	I0914 22:52:23.549596   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:23.549653   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.553563   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:23.553626   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:23.584728   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:23.584749   45407 cri.go:89] found id: ""
	I0914 22:52:23.584756   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:23.584797   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.588600   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:23.588653   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:23.616590   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.616609   45407 cri.go:89] found id: ""
	I0914 22:52:23.616617   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:23.616669   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.620730   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:23.620782   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:23.648741   45407 cri.go:89] found id: ""
	I0914 22:52:23.648765   45407 logs.go:284] 0 containers: []
	W0914 22:52:23.648773   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:23.648781   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:23.648831   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:23.680814   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:23.680839   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:23.680846   45407 cri.go:89] found id: ""
	I0914 22:52:23.680854   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:23.680914   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.685954   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.690428   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:23.690459   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:23.818421   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:23.818456   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.867863   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:23.867894   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.903362   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:23.903393   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:23.943793   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:23.943820   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:24.538337   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:24.538390   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:24.585031   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:24.585072   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:24.639086   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:24.639120   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:24.650905   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:24.650925   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:24.698547   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:24.698590   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:24.745590   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:24.745619   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:24.777667   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:24.777697   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:24.811536   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:24.811565   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:25.132299   46713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (12.552094274s)
	I0914 22:52:25.132371   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:25.146754   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:52:25.155324   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:52:25.164387   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:52:25.164429   46713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0914 22:52:25.227970   46713 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0914 22:52:25.228029   46713 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:52:25.376482   46713 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:52:25.376603   46713 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:52:25.376721   46713 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:52:25.536163   46713 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:52:25.536339   46713 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:52:25.543555   46713 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0914 22:52:25.663579   46713 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:52:25.665315   46713 out.go:204]   - Generating certificates and keys ...
	I0914 22:52:25.665428   46713 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:52:25.665514   46713 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:52:25.665610   46713 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:52:25.665688   46713 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:52:25.665777   46713 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:52:25.665844   46713 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:52:25.665925   46713 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:52:25.666002   46713 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:52:25.666095   46713 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:52:25.666223   46713 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:52:25.666277   46713 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:52:25.666352   46713 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:52:25.931689   46713 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:52:26.088693   46713 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:52:26.251867   46713 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:52:26.566157   46713 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:52:26.567520   46713 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:52:27.360740   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:52:27.360780   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.360788   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.360795   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.360802   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.360809   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.360816   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.360827   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.360841   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.360848   45407 system_pods.go:74] duration metric: took 3.959563404s to wait for pod list to return data ...
	I0914 22:52:27.360859   45407 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:27.363690   45407 default_sa.go:45] found service account: "default"
	I0914 22:52:27.363715   45407 default_sa.go:55] duration metric: took 2.849311ms for default service account to be created ...
	I0914 22:52:27.363724   45407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:27.372219   45407 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:27.372520   45407 system_pods.go:89] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.372552   45407 system_pods.go:89] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.372571   45407 system_pods.go:89] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.372590   45407 system_pods.go:89] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.372602   45407 system_pods.go:89] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.372616   45407 system_pods.go:89] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.372744   45407 system_pods.go:89] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.372835   45407 system_pods.go:89] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.372845   45407 system_pods.go:126] duration metric: took 9.100505ms to wait for k8s-apps to be running ...
	I0914 22:52:27.372854   45407 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:27.373084   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:27.390112   45407 system_svc.go:56] duration metric: took 17.249761ms WaitForService to wait for kubelet.
	I0914 22:52:27.390137   45407 kubeadm.go:581] duration metric: took 4m23.585167656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:27.390174   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:27.393099   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:27.393123   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:27.393133   45407 node_conditions.go:105] duration metric: took 2.953927ms to run NodePressure ...
	I0914 22:52:27.393142   45407 start.go:228] waiting for startup goroutines ...
	I0914 22:52:27.393148   45407 start.go:233] waiting for cluster config update ...
	I0914 22:52:27.393156   45407 start.go:242] writing updated cluster config ...
	I0914 22:52:27.393379   45407 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:27.441228   45407 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:27.442889   45407 out.go:177] * Done! kubectl is now configured to use "no-preload-344363" cluster and "default" namespace by default
	I0914 22:52:26.569354   46713 out.go:204]   - Booting up control plane ...
	I0914 22:52:26.569484   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:52:26.582407   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:52:26.589858   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:52:26.591607   46713 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:52:26.596764   46713 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:52:37.101083   46713 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503887 seconds
	I0914 22:52:37.101244   46713 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:37.116094   46713 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:37.633994   46713 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:37.634186   46713 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-930717 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 22:52:38.144071   46713 kubeadm.go:322] [bootstrap-token] Using token: jnf2g9.h0rslaob8wj902ym
	I0914 22:52:38.145543   46713 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:38.145661   46713 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:38.153514   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:38.159575   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:38.164167   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:38.167903   46713 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:38.241317   46713 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:38.572283   46713 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:38.572309   46713 kubeadm.go:322] 
	I0914 22:52:38.572399   46713 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:38.572410   46713 kubeadm.go:322] 
	I0914 22:52:38.572526   46713 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:38.572547   46713 kubeadm.go:322] 
	I0914 22:52:38.572581   46713 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:38.572669   46713 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:38.572762   46713 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:38.572775   46713 kubeadm.go:322] 
	I0914 22:52:38.572836   46713 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:38.572926   46713 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:38.573012   46713 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:38.573020   46713 kubeadm.go:322] 
	I0914 22:52:38.573089   46713 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0914 22:52:38.573152   46713 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:38.573159   46713 kubeadm.go:322] 
	I0914 22:52:38.573222   46713 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573313   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:38.573336   46713 kubeadm.go:322]     --control-plane 	  
	I0914 22:52:38.573343   46713 kubeadm.go:322] 
	I0914 22:52:38.573406   46713 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:38.573414   46713 kubeadm.go:322] 
	I0914 22:52:38.573527   46713 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573687   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:38.574219   46713 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:38.574248   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:52:38.574261   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:38.575900   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:38.577300   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:38.587120   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:52:38.610197   46713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:38.610265   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.610267   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=old-k8s-version-930717 minikube.k8s.io/updated_at=2023_09_14T22_52_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.858082   46713 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:38.858297   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.960045   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:39.549581   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.049788   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.549998   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.049043   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.549875   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.049596   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.549039   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.049563   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.549663   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.049534   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.549938   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.049227   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.549171   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.049628   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.550019   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.049857   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.549272   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.049648   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.549709   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.049770   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.550050   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.048948   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.549154   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.049695   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.549811   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.049813   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.549858   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.049505   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.149056   46713 kubeadm.go:1081] duration metric: took 14.538858246s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:53.149093   46713 kubeadm.go:406] StartCluster complete in 5m40.2118148s
	I0914 22:52:53.149114   46713 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.149200   46713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:53.150928   46713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.151157   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:53.151287   46713 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:53.151382   46713 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151391   46713 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151405   46713 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-930717"
	I0914 22:52:53.151411   46713 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-930717"
	W0914 22:52:53.151413   46713 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:53.151419   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:52:53.151423   46713 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-930717"
	W0914 22:52:53.151433   46713 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:53.151479   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151412   46713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-930717"
	I0914 22:52:53.151484   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151796   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151820   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151958   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.152044   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.170764   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0914 22:52:53.170912   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0914 22:52:53.171012   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0914 22:52:53.171235   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171345   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171378   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171850   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171870   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171970   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171991   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171999   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.172019   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.172232   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172517   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172572   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172759   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.172910   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.172987   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.173110   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.173146   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.189453   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0914 22:52:53.189789   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.190229   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.190251   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.190646   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.190822   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.192990   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.195176   46713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:53.194738   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0914 22:52:53.196779   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:53.196797   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:53.196813   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.195752   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.197457   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.197476   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.197849   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.198026   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.200022   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.200176   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.201917   46713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:53.200654   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.200795   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.203540   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.203632   46713 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.203652   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.203844   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.204002   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.206460   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.206968   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.206998   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.207153   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.207303   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.207524   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.207672   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.253944   46713 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-930717"
	W0914 22:52:53.253968   46713 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:53.253990   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.254330   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.254377   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0914 22:52:53.270047   46713 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-930717" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0914 22:52:53.270077   46713 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0914 22:52:53.270099   46713 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:53.271730   46713 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:53.270422   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0914 22:52:53.273255   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:53.273653   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.274180   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.274206   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.274559   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.275121   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.275165   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.291000   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0914 22:52:53.291405   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.291906   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.291927   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.292312   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.292529   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.294366   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.294583   46713 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.294598   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:53.294611   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.297265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.297809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297895   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.298057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.298236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.298383   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.344235   46713 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.344478   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:53.350176   46713 node_ready.go:49] node "old-k8s-version-930717" has status "Ready":"True"
	I0914 22:52:53.350196   46713 node_ready.go:38] duration metric: took 5.934445ms waiting for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.350204   46713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:53.359263   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:53.359296   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:53.367792   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:53.384576   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.397687   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:53.397703   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:53.439813   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:53.439843   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:53.473431   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.499877   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:54.233171   46713 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:54.365130   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365156   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365178   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365198   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365438   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365465   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365476   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365481   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.365486   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365546   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365556   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365565   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365574   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367064   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367090   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367068   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367489   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367513   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367526   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.367540   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367489   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367757   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367810   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367852   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.830646   46713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330728839s)
	I0914 22:52:54.830698   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.830711   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831036   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831059   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831065   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.831080   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.831096   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831312   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831328   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831338   46713 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-930717"
	I0914 22:52:54.832992   46713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:54.834828   46713 addons.go:502] enable addons completed in 1.683549699s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:55.415046   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:57.878279   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:59.879299   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:01.879559   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:03.880088   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:05.880334   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.880355   46713 pod_ready.go:81] duration metric: took 12.512536425s waiting for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.880364   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885370   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.885386   46713 pod_ready.go:81] duration metric: took 5.016722ms waiting for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885394   46713 pod_ready.go:38] duration metric: took 12.535181673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:53:05.885413   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:53:05.885466   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:53:05.901504   46713 api_server.go:72] duration metric: took 12.631380008s to wait for apiserver process to appear ...
	I0914 22:53:05.901522   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:53:05.901534   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:53:05.907706   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:53:05.908445   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:53:05.908466   46713 api_server.go:131] duration metric: took 6.937898ms to wait for apiserver health ...
	I0914 22:53:05.908475   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:53:05.911983   46713 system_pods.go:59] 5 kube-system pods found
	I0914 22:53:05.912001   46713 system_pods.go:61] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.912008   46713 system_pods.go:61] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.912013   46713 system_pods.go:61] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.912022   46713 system_pods.go:61] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.912033   46713 system_pods.go:61] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.912043   46713 system_pods.go:74] duration metric: took 3.562804ms to wait for pod list to return data ...
	I0914 22:53:05.912054   46713 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:53:05.914248   46713 default_sa.go:45] found service account: "default"
	I0914 22:53:05.914267   46713 default_sa.go:55] duration metric: took 2.203622ms for default service account to be created ...
	I0914 22:53:05.914276   46713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:53:05.917292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:05.917310   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.917315   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.917319   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.917325   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.917331   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.917343   46713 retry.go:31] will retry after 277.910308ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.201147   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.201170   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.201175   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.201179   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.201185   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.201191   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.201205   46713 retry.go:31] will retry after 262.96693ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.470372   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.470410   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.470418   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.470425   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.470435   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.470446   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.470481   46713 retry.go:31] will retry after 486.428451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.961666   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.961693   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.961700   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.961706   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.961716   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.961724   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.961740   46713 retry.go:31] will retry after 524.467148ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:07.491292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:07.491315   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:07.491321   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:07.491325   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:07.491331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:07.491337   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:07.491370   46713 retry.go:31] will retry after 567.308028ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.063587   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.063612   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.063618   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.063622   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.063629   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.063635   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.063649   46713 retry.go:31] will retry after 723.150919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.791530   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.791561   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.791571   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.791578   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.791588   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.791597   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.791616   46713 retry.go:31] will retry after 1.173741151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:09.971866   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:09.971895   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:09.971903   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:09.971909   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:09.971919   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:09.971928   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:09.971946   46713 retry.go:31] will retry after 1.046713916s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:11.024191   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:11.024220   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:11.024226   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:11.024231   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:11.024238   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:11.024244   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:11.024260   46713 retry.go:31] will retry after 1.531910243s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:12.562517   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:12.562555   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:12.562564   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:12.562573   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:12.562584   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:12.562594   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:12.562612   46713 retry.go:31] will retry after 2.000243773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:14.570247   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:14.570284   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:14.570294   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:14.570303   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:14.570320   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:14.570329   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:14.570346   46713 retry.go:31] will retry after 2.095330784s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:16.670345   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:16.670372   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:16.670377   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:16.670382   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:16.670394   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:16.670401   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:16.670416   46713 retry.go:31] will retry after 2.811644755s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:19.488311   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:19.488339   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:19.488344   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:19.488348   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:19.488354   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:19.488362   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:19.488380   46713 retry.go:31] will retry after 3.274452692s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:22.768417   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:22.768446   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:22.768454   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:22.768461   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:22.768471   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:22.768481   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:22.768499   46713 retry.go:31] will retry after 5.52037196s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:28.294932   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:28.294958   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:28.294964   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:28.294967   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:28.294975   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:28.294980   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:28.294994   46713 retry.go:31] will retry after 4.305647383s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:32.605867   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:32.605894   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:32.605900   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:32.605903   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:32.605910   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:32.605915   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:32.605929   46713 retry.go:31] will retry after 8.214918081s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:40.825284   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:40.825314   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:40.825319   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:40.825324   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:40.825331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:40.825336   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:40.825352   46713 retry.go:31] will retry after 10.5220598s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:51.353809   46713 system_pods.go:86] 7 kube-system pods found
	I0914 22:53:51.353844   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:51.353851   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:51.353856   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Pending
	I0914 22:53:51.353862   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:51.353868   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Pending
	I0914 22:53:51.353878   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:51.353887   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:51.353907   46713 retry.go:31] will retry after 10.482387504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:54:01.842876   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:01.842900   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:01.842905   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:01.842909   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Pending
	I0914 22:54:01.842914   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:01.842918   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Pending
	I0914 22:54:01.842921   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:01.842925   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:01.842931   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:01.842937   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:01.842950   46713 retry.go:31] will retry after 14.535469931s: missing components: etcd, kube-controller-manager
	I0914 22:54:16.384703   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:16.384732   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:16.384738   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:16.384742   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Running
	I0914 22:54:16.384747   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:16.384751   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Running
	I0914 22:54:16.384754   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:16.384758   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:16.384766   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:16.384773   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:16.384782   46713 system_pods.go:126] duration metric: took 1m10.470499333s to wait for k8s-apps to be running ...
	I0914 22:54:16.384791   46713 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:54:16.384849   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:16.409329   46713 system_svc.go:56] duration metric: took 24.530447ms WaitForService to wait for kubelet.
	I0914 22:54:16.409359   46713 kubeadm.go:581] duration metric: took 1m23.139238057s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:54:16.409385   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:54:16.412461   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:54:16.412490   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:54:16.412505   46713 node_conditions.go:105] duration metric: took 3.107771ms to run NodePressure ...
	I0914 22:54:16.412519   46713 start.go:228] waiting for startup goroutines ...
	I0914 22:54:16.412529   46713 start.go:233] waiting for cluster config update ...
	I0914 22:54:16.412546   46713 start.go:242] writing updated cluster config ...
	I0914 22:54:16.412870   46713 ssh_runner.go:195] Run: rm -f paused
	I0914 22:54:16.460181   46713 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0914 22:54:16.461844   46713 out.go:177] 
	W0914 22:54:16.463221   46713 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0914 22:54:16.464486   46713 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0914 22:54:16.465912   46713 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-930717" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:47:15 UTC, ends at Thu 2023-09-14 23:06:58 UTC. --
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.617697083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=34ccc20f-fbd9-4c4a-ab2d-7849ae04421c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.617805951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=34ccc20f-fbd9-4c4a-ab2d-7849ae04421c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.618243213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=34ccc20f-fbd9-4c4a-ab2d-7849ae04421c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.650340918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66e26621-483d-4d28-b56a-1cfb40c7cee3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.650456977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66e26621-483d-4d28-b56a-1cfb40c7cee3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.650733995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66e26621-483d-4d28-b56a-1cfb40c7cee3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.681971501Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=704dc121-8b66-4476-b888-43d1c2192681 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.682329604Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&PodSandboxMetadata{Name:busybox,Uid:608ce466-af8d-4d2f-b38f-dabc477f308b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731688368911983,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:48:00.641977405Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-rntdg,Uid:26064ba4-be5d-45b8-bc54-9af74efb4b1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:16947316883515059
28,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:48:00.641979191Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61eb0f2fc6132c9f26c62ad64607b0b06e2adf45e6796c274fadcfbcbbf19457,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-swnnf,Uid:4b0db27e-c36f-452e-8ed5-57027bf9ab99,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731684755233549,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-swnnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0db27e-c36f-452e-8ed5-57027bf9ab99,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:48:00.6
41974229Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:dafe9e6f-dd6b-4003-9728-d5b0aec14091,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731680995651143,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T22:48:00.641975763Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-zzkbp,Uid:1d3cfe91-a904-4c1a-834d-261806db97c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731680974620428,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a904-4c1a-834d-261806db97c0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2023-09-14T22:48:00.641963413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-344363,Uid:b28c6d3777c835bf9bf207455b86d887,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731675211147264,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.60:8443,kubernetes.io/config.hash: b28c6d3777c835bf9bf207455b86d887,kubernetes.io/config.seen: 2023-09-14T22:47:54.641866134Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&PodSandboxMetadata{Na
me:etcd-no-preload-344363,Uid:36c8ca1c24ef4f03d635561ab899c4d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731675205665885,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.60:2379,kubernetes.io/config.hash: 36c8ca1c24ef4f03d635561ab899c4d0,kubernetes.io/config.seen: 2023-09-14T22:47:54.641865267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-344363,Uid:7b8e634c7fe8efa81d10e65af8d91cb4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731675192891086,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7b8e634c7fe8efa81d10e65af8d91cb4,kubernetes.io/config.seen: 2023-09-14T22:47:54.641860577Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-344363,Uid:a2a7dffe6dea61ab94b848f785eccb01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731675158868196,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b848f785eccb01,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a2a7dffe6dea61ab94b848f785eccb01,kube
rnetes.io/config.seen: 2023-09-14T22:47:54.641864372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=704dc121-8b66-4476-b888-43d1c2192681 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.682972862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a97f80f-b981-4a98-97e9-23e49628190e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.683045041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a97f80f-b981-4a98-97e9-23e49628190e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.683318595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a97f80f-b981-4a98-97e9-23e49628190e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.691295432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9a86cfcd-3e1a-4d6b-ac8e-5e7d4135306e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.691393835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9a86cfcd-3e1a-4d6b-ac8e-5e7d4135306e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.691638869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9a86cfcd-3e1a-4d6b-ac8e-5e7d4135306e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.717483215Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e1487cf4-da26-43cb-8b54-1c4ab03fc9a7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.717728175Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&PodSandboxMetadata{Name:busybox,Uid:608ce466-af8d-4d2f-b38f-dabc477f308b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731688368911983,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:48:00.641977405Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-rntdg,Uid:26064ba4-be5d-45b8-bc54-9af74efb4b1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:16947316883515059
28,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:48:00.641979191Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61eb0f2fc6132c9f26c62ad64607b0b06e2adf45e6796c274fadcfbcbbf19457,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-swnnf,Uid:4b0db27e-c36f-452e-8ed5-57027bf9ab99,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731684755233549,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-swnnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0db27e-c36f-452e-8ed5-57027bf9ab99,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-14T22:48:00.6
41974229Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:dafe9e6f-dd6b-4003-9728-d5b0aec14091,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731680995651143,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-14T22:48:00.641975763Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-zzkbp,Uid:1d3cfe91-a904-4c1a-834d-261806db97c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731680974620428,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a904-4c1a-834d-261806db97c0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2023-09-14T22:48:00.641963413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-344363,Uid:b28c6d3777c835bf9bf207455b86d887,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731675211147264,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.60:8443,kubernetes.io/config.hash: b28c6d3777c835bf9bf207455b86d887,kubernetes.io/config.seen: 2023-09-14T22:47:54.641866134Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&PodSandboxMetadata{Na
me:etcd-no-preload-344363,Uid:36c8ca1c24ef4f03d635561ab899c4d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731675205665885,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.60:2379,kubernetes.io/config.hash: 36c8ca1c24ef4f03d635561ab899c4d0,kubernetes.io/config.seen: 2023-09-14T22:47:54.641865267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-344363,Uid:7b8e634c7fe8efa81d10e65af8d91cb4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731675192891086,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7b8e634c7fe8efa81d10e65af8d91cb4,kubernetes.io/config.seen: 2023-09-14T22:47:54.641860577Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-344363,Uid:a2a7dffe6dea61ab94b848f785eccb01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694731675158868196,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b848f785eccb01,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a2a7dffe6dea61ab94b848f785eccb01,kube
rnetes.io/config.seen: 2023-09-14T22:47:54.641864372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=e1487cf4-da26-43cb-8b54-1c4ab03fc9a7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.718744967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=702758a1-116b-4c7b-ba4e-b0cb98ee49f2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.718842043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=702758a1-116b-4c7b-ba4e-b0cb98ee49f2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.719038791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab9
4b848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=702758a1-116b-4c7b-ba4e-b0cb98ee49f2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.739236695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9028a633-bd8b-4b77-8a79-cd6a437b39cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.739297941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9028a633-bd8b-4b77-8a79-cd6a437b39cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.739493441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9028a633-bd8b-4b77-8a79-cd6a437b39cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.770350548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2cdf0a9a-c989-438c-8b4f-48b51c927e7f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.770422313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2cdf0a9a-c989-438c-8b4f-48b51c927e7f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:58 no-preload-344363 crio[722]: time="2023-09-14 23:06:58.770686351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694731711899242852,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97022350cc3ee095ff46d48476a74af84fa3ce8dd0fe6e374d4e5def14e4ee0e,PodSandboxId:6f3da613ffbe949c53a8c35ef50f7bb4e5a3a387e723f74cddaaea07ab656d23,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694731691688157775,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 608ce466-af8d-4d2f-b38f-dabc477f308b,},Annotations:map[string]string{io.kubernetes.container.hash: 5597041d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a,PodSandboxId:dc7ce60e4ea6bc731a7092a6ead37237d3cdf42b85a416593d3821ce9a11d0c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694731689128942952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rntdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26064ba4-be5d-45b8-bc54-9af74efb4b1c,},Annotations:map[string]string{io.kubernetes.container.hash: 88e8d8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1,PodSandboxId:194b6c7a64b01f44980da0ca25d92d7ad3f709432bd8f171cd89b264f375b9e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694731681444463018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3cfe91-a
904-4c1a-834d-261806db97c0,},Annotations:map[string]string{io.kubernetes.container.hash: 97e5fca9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669,PodSandboxId:48e581734bb7158b6b6a6a4a25db54b4ab2b68ddce17062d450011fc984c0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1694731681364254307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafe9e6f-dd6
b-4003-9728-d5b0aec14091,},Annotations:map[string]string{io.kubernetes.container.hash: 36578bfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566,PodSandboxId:8bc4a7d7f02be8f1d90d9c5e69d9620c7070534f2a2b4c2789254b540815c338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694731676121431696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7dffe6dea61ab94b
848f785eccb01,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2,PodSandboxId:2314bbd92316dc1589dae6e3f90f3972f1b007857d82aff9b42d2c3a908d8df2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694731676203090791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 7b8e634c7fe8efa81d10e65af8d91cb4,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38,PodSandboxId:3a1835e7397449ba0ddceaa3e7561d055ba4a3ba753a9e0910135b875ea0e84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694731676064767075,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36c8ca1c24ef4f03d635561ab899c4d0,},Annotation
s:map[string]string{io.kubernetes.container.hash: 90952d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043,PodSandboxId:b588cc7554b07746d82d0613b281e742d14446b8f415a95ef28fbd113853e6a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694731675618083540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-344363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b28c6d3777c835bf9bf207455b86d887,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6ef34d76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2cdf0a9a-c989-438c-8b4f-48b51c927e7f name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	0d6da8266a65b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   48e581734bb71
	97022350cc3ee       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   6f3da613ffbe9
	8a06ddba66f0a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      18 minutes ago      Running             coredns                   1                   dc7ce60e4ea6b
	eb1a03278a771       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      18 minutes ago      Running             kube-proxy                1                   194b6c7a64b01
	a554481de89e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       1                   48e581734bb71
	d670d4deec4bc       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      19 minutes ago      Running             kube-controller-manager   1                   2314bbd92316d
	6fa0d09d74d54       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      19 minutes ago      Running             kube-scheduler            1                   8bc4a7d7f02be
	db7177e981567       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      19 minutes ago      Running             etcd                      1                   3a1835e739744
	33222eae96b0a       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      19 minutes ago      Running             kube-apiserver            1                   b588cc7554b07
	
	* 
	* ==> coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56337 - 12331 "HINFO IN 315502276035198041.3823794961810864963. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015342469s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-344363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-344363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=no-preload-344363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_38_24_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:38:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-344363
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 23:06:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 23:03:50 +0000   Thu, 14 Sep 2023 22:38:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 23:03:50 +0000   Thu, 14 Sep 2023 22:38:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 23:03:50 +0000   Thu, 14 Sep 2023 22:38:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 23:03:50 +0000   Thu, 14 Sep 2023 22:48:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    no-preload-344363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8881348dd73843818e568e820cb8ced5
	  System UUID:                8881348d-d738-4381-8e56-8e820cb8ced5
	  Boot ID:                    3315b2a3-ec47-4527-946a-63c262d71b01
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-rntdg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-344363                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-344363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-344363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-zzkbp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-344363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-swnnf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-344363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-344363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-344363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-344363 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-344363 event: Registered Node no-preload-344363 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-344363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-344363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-344363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-344363 event: Registered Node no-preload-344363 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep14 22:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.079398] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.601158] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.857565] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135211] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.461894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.344272] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.116087] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.145073] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.117210] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.241417] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[ +31.042084] systemd-fstab-generator[1226]: Ignoring "noauto" for root device
	[Sep14 22:48] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] <==
	* {"level":"info","ts":"2023-09-14T22:47:57.524388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:47:57.524433Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:47:57.524219Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T22:47:58.685286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T22:47:58.685427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:47:58.685486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgPreVoteResp from 1a622f206f99396a at term 2"}
	{"level":"info","ts":"2023-09-14T22:47:58.685526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T22:47:58.685558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgVoteResp from 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2023-09-14T22:47:58.685588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became leader at term 3"}
	{"level":"info","ts":"2023-09-14T22:47:58.685616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a622f206f99396a elected leader 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2023-09-14T22:47:58.694429Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1a622f206f99396a","local-member-attributes":"{Name:no-preload-344363 ClientURLs:[https://192.168.39.60:2379]}","request-path":"/0/members/1a622f206f99396a/attributes","cluster-id":"94dd135126e1e7b0","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:47:58.695307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:47:58.696395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:47:58.696552Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:47:58.713495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.60:2379"}
	{"level":"info","ts":"2023-09-14T22:47:58.722612Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:47:58.722654Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-09-14T22:48:00.574704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.196116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/no-preload-344363\" ","response":"range_response_count:1 size:691"}
	{"level":"info","ts":"2023-09-14T22:48:00.574891Z","caller":"traceutil/trace.go:171","msg":"trace[462883843] range","detail":"{range_begin:/registry/csinodes/no-preload-344363; range_end:; response_count:1; response_revision:420; }","duration":"100.382326ms","start":"2023-09-14T22:48:00.474484Z","end":"2023-09-14T22:48:00.574866Z","steps":["trace[462883843] 'agreement among raft nodes before linearized reading'  (duration: 98.904758ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:57:58.811718Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":771}
	{"level":"info","ts":"2023-09-14T22:57:58.814784Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":771,"took":"2.694983ms","hash":1002913574}
	{"level":"info","ts":"2023-09-14T22:57:58.814848Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1002913574,"revision":771,"compact-revision":-1}
	{"level":"info","ts":"2023-09-14T23:02:58.820711Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1013}
	{"level":"info","ts":"2023-09-14T23:02:58.822506Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1013,"took":"1.457915ms","hash":1570280830}
	{"level":"info","ts":"2023-09-14T23:02:58.822565Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1570280830,"revision":1013,"compact-revision":771}
	
	* 
	* ==> kernel <==
	*  23:06:59 up 19 min,  0 users,  load average: 0.03, 0.08, 0.09
	Linux no-preload-344363 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] <==
	* I0914 23:03:01.479557       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:03:01.479620       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:03:01.479733       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:03:01.480965       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:04:00.377799       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.218.65:443: connect: connection refused
	I0914 23:04:00.377873       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:04:01.480293       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:04:01.481091       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 23:04:01.481148       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:04:01.481439       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:04:01.481628       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:04:01.482853       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:05:00.377413       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.218.65:443: connect: connection refused
	I0914 23:05:00.377587       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 23:06:00.377898       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.218.65:443: connect: connection refused
	I0914 23:06:00.378099       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0914 23:06:01.482503       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:06:01.482686       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0914 23:06:01.482741       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 23:06:01.483639       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 23:06:01.483760       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:06:01.483794       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] <==
	* I0914 23:01:13.618418       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:01:43.116144       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:01:43.628100       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:02:13.122822       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:02:13.636497       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:02:43.128887       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:02:43.644915       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:03:13.135012       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:03:13.653711       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:03:43.141508       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:03:43.663721       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 23:04:12.735798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="282.708µs"
	E0914 23:04:13.147851       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:04:13.673365       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 23:04:23.733685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="88.369µs"
	E0914 23:04:43.154421       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:04:43.683654       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:05:13.160877       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:05:13.694561       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:05:43.167108       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:05:43.706338       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:06:13.173291       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:06:13.716568       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 23:06:43.178325       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0914 23:06:43.726423       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] <==
	* I0914 22:48:01.863468       1 server_others.go:69] "Using iptables proxy"
	I0914 22:48:01.886884       1 node.go:141] Successfully retrieved node IP: 192.168.39.60
	I0914 22:48:01.931005       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 22:48:01.931061       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 22:48:01.933747       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:48:01.934509       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:48:01.934914       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:48:01.934967       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:48:01.937360       1 config.go:188] "Starting service config controller"
	I0914 22:48:01.937967       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:48:01.938037       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:48:01.938065       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:48:01.940521       1 config.go:315] "Starting node config controller"
	I0914 22:48:01.940573       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:48:02.038468       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:48:02.038489       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:48:02.040693       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] <==
	* I0914 22:47:58.555719       1 serving.go:348] Generated self-signed cert in-memory
	I0914 22:48:00.499895       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 22:48:00.500014       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:48:00.582485       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0914 22:48:00.582571       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0914 22:48:00.582796       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 22:48:00.582896       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:48:00.582949       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0914 22:48:00.582981       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0914 22:48:00.585567       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 22:48:00.585698       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 22:48:00.684423       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0914 22:48:00.684604       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:48:00.689288       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:47:15 UTC, ends at Thu 2023-09-14 23:06:59 UTC. --
	Sep 14 23:04:12 no-preload-344363 kubelet[1232]: E0914 23:04:12.718318    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:04:23 no-preload-344363 kubelet[1232]: E0914 23:04:23.716984    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:04:34 no-preload-344363 kubelet[1232]: E0914 23:04:34.717089    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:04:46 no-preload-344363 kubelet[1232]: E0914 23:04:46.717795    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:04:54 no-preload-344363 kubelet[1232]: E0914 23:04:54.833545    1232 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:04:54 no-preload-344363 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:04:54 no-preload-344363 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:04:54 no-preload-344363 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:04:59 no-preload-344363 kubelet[1232]: E0914 23:04:59.716804    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:05:12 no-preload-344363 kubelet[1232]: E0914 23:05:12.717358    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:05:23 no-preload-344363 kubelet[1232]: E0914 23:05:23.717057    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:05:37 no-preload-344363 kubelet[1232]: E0914 23:05:37.717325    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:05:50 no-preload-344363 kubelet[1232]: E0914 23:05:50.717953    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:05:54 no-preload-344363 kubelet[1232]: E0914 23:05:54.834151    1232 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:05:54 no-preload-344363 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:05:54 no-preload-344363 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:05:54 no-preload-344363 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 23:06:05 no-preload-344363 kubelet[1232]: E0914 23:06:05.717039    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:06:18 no-preload-344363 kubelet[1232]: E0914 23:06:18.716378    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:06:32 no-preload-344363 kubelet[1232]: E0914 23:06:32.719993    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:06:46 no-preload-344363 kubelet[1232]: E0914 23:06:46.717388    1232 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-swnnf" podUID="4b0db27e-c36f-452e-8ed5-57027bf9ab99"
	Sep 14 23:06:54 no-preload-344363 kubelet[1232]: E0914 23:06:54.833442    1232 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 14 23:06:54 no-preload-344363 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 23:06:54 no-preload-344363 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 23:06:54 no-preload-344363 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] <==
	* I0914 22:48:32.007675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:48:32.019963       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:48:32.020076       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:48:49.422811       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:48:49.423275       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-344363_8d3ecd0d-6913-482e-9050-e4f8e3b81f4a!
	I0914 22:48:49.423376       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d18b0ddf-5cd9-4d5d-8650-5ce9016e413a", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-344363_8d3ecd0d-6913-482e-9050-e4f8e3b81f4a became leader
	I0914 22:48:49.524535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-344363_8d3ecd0d-6913-482e-9050-e4f8e3b81f4a!
	
	* 
	* ==> storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] <==
	* I0914 22:48:01.631789       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 22:48:31.634337       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-344363 -n no-preload-344363
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-344363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-swnnf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-344363 describe pod metrics-server-57f55c9bc5-swnnf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-344363 describe pod metrics-server-57f55c9bc5-swnnf: exit status 1 (88.739589ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-swnnf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-344363 describe pod metrics-server-57f55c9bc5-swnnf: exit status 1
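The field-selector query above surfaced metrics-server-57f55c9bc5-swnnf as the only non-Running pod, yet the describe that followed reported NotFound, which usually means the pod was deleted or replaced by its ReplicaSet between the two calls. A sketch that describes whatever metrics-server pod currently exists instead of a captured name (assuming the addon's pods carry the usual k8s-app=metrics-server label):

    kubectl --context no-preload-344363 -n kube-system describe pods -l k8s-app=metrics-server
    kubectl --context no-preload-344363 -n kube-system get events --field-selector involvedObject.kind=Pod | grep -i metrics-server

Describing by label side-steps the race on the pod name; the events query still shows the ImagePullBackOff history for the fake.domain registry override after the pod has been rescheduled.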
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (329.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (219.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 23:03:32.188346   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 23:04:29.764699   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 23:06:36.475148   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-930717 -n old-k8s-version-930717
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-14 23:06:55.770952767 +0000 UTC m=+5437.965294394
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-930717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-930717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.5µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-930717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
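start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment was created with the overridden image registry.k8s.io/echoserver:1.4 (the value passed via --images=MetricsScraper= earlier in this run); the deployment info is empty here only because the describe above had already hit the context deadline. A minimal sketch for pulling just the container image out of that deployment when re-checking by hand:

    kubectl --context old-k8s-version-930717 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

An empty result or a NotFound error would indicate the dashboard addon never created the deployment, while a different image string would point at the override not being applied.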
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-930717 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-930717 logs -n 25: (1.599069742s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-711912                           | kubernetes-upgrade-711912    | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:36 UTC |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:36 UTC | 14 Sep 23 22:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-344363             | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC | 14 Sep 23 22:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-631227                              | cert-expiration-631227       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:39 UTC |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC | 14 Sep 23 22:40 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799144  | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC |                     |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-948459                              | stopped-upgrade-948459       | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:40 UTC |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:40 UTC | 14 Sep 23 22:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-344363                  | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-344363                                   | no-preload-344363            | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-588699            | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC | 14 Sep 23 22:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799144       | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-930717        | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799144 | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC | 14 Sep 23 22:51 UTC |
	|         | default-k8s-diff-port-799144                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-588699                 | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-588699                                  | embed-certs-588699           | jenkins | v1.31.2 | 14 Sep 23 22:44 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-930717             | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-930717                              | old-k8s-version-930717       | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC | 14 Sep 23 22:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:45:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:45:20.513575   46713 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:45:20.513835   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.513847   46713 out.go:309] Setting ErrFile to fd 2...
	I0914 22:45:20.513852   46713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:45:20.514030   46713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:45:20.514571   46713 out.go:303] Setting JSON to false
	I0914 22:45:20.515550   46713 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5263,"bootTime":1694726258,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:45:20.515607   46713 start.go:138] virtualization: kvm guest
	I0914 22:45:20.517738   46713 out.go:177] * [old-k8s-version-930717] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:45:20.519301   46713 notify.go:220] Checking for updates...
	I0914 22:45:20.519309   46713 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:45:20.520886   46713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:45:20.522525   46713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:45:20.524172   46713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:45:20.525826   46713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:45:20.527204   46713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:45:20.529068   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:45:20.529489   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.529542   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.548088   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0914 22:45:20.548488   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.548969   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.548985   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.549404   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.549555   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.551507   46713 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 22:45:20.552878   46713 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:45:20.553145   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:45:20.553176   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:45:20.566825   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0914 22:45:20.567181   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:45:20.567617   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:45:20.567646   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:45:20.568018   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:45:20.568195   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:45:20.601886   46713 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 22:45:20.603176   46713 start.go:298] selected driver: kvm2
	I0914 22:45:20.603188   46713 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.603284   46713 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:45:20.603926   46713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.603997   46713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 22:45:20.617678   46713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 22:45:20.618009   46713 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:45:20.618045   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:45:20.618062   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:45:20.618075   46713 start_flags.go:321] config:
	{Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:45:20.618204   46713 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:45:20.619892   46713 out.go:177] * Starting control plane node old-k8s-version-930717 in cluster old-k8s-version-930717
	I0914 22:45:22.939748   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:20.621146   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:45:20.621171   46713 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0914 22:45:20.621184   46713 cache.go:57] Caching tarball of preloaded images
	I0914 22:45:20.621265   46713 preload.go:174] Found /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 22:45:20.621286   46713 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0914 22:45:20.621381   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:45:20.621551   46713 start.go:365] acquiring machines lock for old-k8s-version-930717: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:45:29.019730   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:32.091705   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:38.171724   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:41.243661   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:47.323733   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:50.395751   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:56.475703   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:45:59.547782   45407 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.60:22: connect: no route to host
	I0914 22:46:02.551591   45954 start.go:369] acquired machines lock for "default-k8s-diff-port-799144" in 3m15.018428257s
	I0914 22:46:02.551631   45954 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:02.551642   45954 fix.go:54] fixHost starting: 
	I0914 22:46:02.551944   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:02.551972   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:02.566520   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0914 22:46:02.566922   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:02.567373   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:02.567392   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:02.567734   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:02.567961   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:02.568128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:02.569692   45954 fix.go:102] recreateIfNeeded on default-k8s-diff-port-799144: state=Stopped err=<nil>
	I0914 22:46:02.569714   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	W0914 22:46:02.569887   45954 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:02.571684   45954 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799144" ...
	I0914 22:46:02.549458   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:02.549490   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:46:02.551419   45407 machine.go:91] provisioned docker machine in 4m37.435317847s
	I0914 22:46:02.551457   45407 fix.go:56] fixHost completed within 4m37.455553972s
	I0914 22:46:02.551462   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 4m37.455581515s
	W0914 22:46:02.551502   45407 start.go:688] error starting host: provision: host is not running
	W0914 22:46:02.551586   45407 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0914 22:46:02.551600   45407 start.go:703] Will try again in 5 seconds ...
	I0914 22:46:02.573354   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Start
	I0914 22:46:02.573535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring networks are active...
	I0914 22:46:02.574326   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network default is active
	I0914 22:46:02.574644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Ensuring network mk-default-k8s-diff-port-799144 is active
	I0914 22:46:02.575046   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Getting domain xml...
	I0914 22:46:02.575767   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Creating domain...
	I0914 22:46:03.792613   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting to get IP...
	I0914 22:46:03.793573   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.793932   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:03.794029   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:03.793928   46868 retry.go:31] will retry after 250.767464ms: waiting for machine to come up
	I0914 22:46:04.046447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.046928   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.046853   46868 retry.go:31] will retry after 320.29371ms: waiting for machine to come up
	I0914 22:46:04.368383   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368782   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.368814   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.368726   46868 retry.go:31] will retry after 295.479496ms: waiting for machine to come up
	I0914 22:46:04.666192   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666655   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:04.666680   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:04.666595   46868 retry.go:31] will retry after 572.033699ms: waiting for machine to come up
	I0914 22:46:05.240496   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240920   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.240953   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.240872   46868 retry.go:31] will retry after 493.557238ms: waiting for machine to come up
	I0914 22:46:05.735682   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736201   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:05.736245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:05.736150   46868 retry.go:31] will retry after 848.645524ms: waiting for machine to come up
	I0914 22:46:06.586116   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:06.586568   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:06.586473   46868 retry.go:31] will retry after 866.110647ms: waiting for machine to come up
	I0914 22:46:07.553803   45407 start.go:365] acquiring machines lock for no-preload-344363: {Name:mk924d76c2d05995311cfed715d94405211b8bbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 22:46:07.454431   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454798   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:07.454827   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:07.454743   46868 retry.go:31] will retry after 1.485337575s: waiting for machine to come up
	I0914 22:46:08.941761   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942136   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:08.942177   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:08.942104   46868 retry.go:31] will retry after 1.640651684s: waiting for machine to come up
	I0914 22:46:10.584576   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584905   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:10.584939   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:10.584838   46868 retry.go:31] will retry after 1.656716681s: waiting for machine to come up
	I0914 22:46:12.243599   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244096   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:12.244119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:12.244037   46868 retry.go:31] will retry after 2.692733224s: waiting for machine to come up
	I0914 22:46:14.939726   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940035   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:14.940064   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:14.939986   46868 retry.go:31] will retry after 2.745837942s: waiting for machine to come up
	I0914 22:46:22.180177   46412 start.go:369] acquired machines lock for "embed-certs-588699" in 2m3.238409394s
	I0914 22:46:22.180244   46412 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:22.180256   46412 fix.go:54] fixHost starting: 
	I0914 22:46:22.180661   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:22.180706   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:22.196558   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0914 22:46:22.196900   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:22.197304   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:46:22.197326   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:22.197618   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:22.197808   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:22.197986   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:46:22.199388   46412 fix.go:102] recreateIfNeeded on embed-certs-588699: state=Stopped err=<nil>
	I0914 22:46:22.199423   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	W0914 22:46:22.199595   46412 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:22.202757   46412 out.go:177] * Restarting existing kvm2 VM for "embed-certs-588699" ...
	I0914 22:46:17.687397   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687911   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | unable to find current IP address of domain default-k8s-diff-port-799144 in network mk-default-k8s-diff-port-799144
	I0914 22:46:17.687937   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | I0914 22:46:17.687878   46868 retry.go:31] will retry after 3.174192278s: waiting for machine to come up
	I0914 22:46:20.866173   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866687   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Found IP for machine: 192.168.50.175
	I0914 22:46:20.866722   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has current primary IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.866737   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserving static IP address...
	I0914 22:46:20.867209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.867245   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | skip adding static IP to network mk-default-k8s-diff-port-799144 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799144", mac: "52:54:00:ee:44:c7", ip: "192.168.50.175"}
	I0914 22:46:20.867263   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Reserved static IP address: 192.168.50.175
	I0914 22:46:20.867290   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Waiting for SSH to be available...
	I0914 22:46:20.867303   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Getting to WaitForSSH function...
	I0914 22:46:20.869597   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.869960   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.869993   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.870103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH client type: external
	I0914 22:46:20.870137   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa (-rw-------)
	I0914 22:46:20.870193   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:20.870218   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | About to run SSH command:
	I0914 22:46:20.870237   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | exit 0
	I0914 22:46:20.959125   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:20.959456   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetConfigRaw
	I0914 22:46:20.960082   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:20.962512   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.962889   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.962915   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.963114   45954 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/config.json ...
	I0914 22:46:20.963282   45954 machine.go:88] provisioning docker machine ...
	I0914 22:46:20.963300   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:20.963509   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963682   45954 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799144"
	I0914 22:46:20.963709   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:20.963899   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:20.966359   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966728   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:20.966757   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:20.966956   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:20.967146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967287   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:20.967420   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:20.967584   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:20.967963   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:20.967983   45954 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799144 && echo "default-k8s-diff-port-799144" | sudo tee /etc/hostname
	I0914 22:46:21.098114   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799144
	
	I0914 22:46:21.098158   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.100804   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101167   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.101208   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.101332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.101532   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.101855   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.102028   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.102386   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.102406   45954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799144/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:21.225929   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:21.225964   45954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:21.225992   45954 buildroot.go:174] setting up certificates
	I0914 22:46:21.226007   45954 provision.go:83] configureAuth start
	I0914 22:46:21.226023   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetMachineName
	I0914 22:46:21.226299   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:21.229126   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229514   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.229555   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.229644   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.231683   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.231992   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.232027   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.232179   45954 provision.go:138] copyHostCerts
	I0914 22:46:21.232233   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:21.232247   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:21.232321   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:21.232412   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:21.232421   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:21.232446   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:21.232542   45954 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:21.232551   45954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:21.232572   45954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:21.232617   45954 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799144 san=[192.168.50.175 192.168.50.175 localhost 127.0.0.1 minikube default-k8s-diff-port-799144]
	I0914 22:46:21.489180   45954 provision.go:172] copyRemoteCerts
	I0914 22:46:21.489234   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:21.489257   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.491989   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492308   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.492334   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.492535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.492734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.492869   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.493038   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:21.579991   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0914 22:46:21.599819   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:21.619391   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:21.638607   45954 provision.go:86] duration metric: configureAuth took 412.585328ms
	I0914 22:46:21.638629   45954 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:21.638797   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:21.638867   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.641693   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642033   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.642067   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.642209   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.642399   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642562   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.642734   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.642900   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:21.643239   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:21.643257   45954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:21.928913   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:21.928940   45954 machine.go:91] provisioned docker machine in 965.645328ms
	I0914 22:46:21.928952   45954 start.go:300] post-start starting for "default-k8s-diff-port-799144" (driver="kvm2")
	I0914 22:46:21.928964   45954 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:21.928987   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:21.929377   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:21.929425   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:21.931979   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932350   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:21.932388   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:21.932475   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:21.932704   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:21.932923   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:21.933059   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.020329   45954 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:22.024444   45954 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:22.024458   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:22.024513   45954 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:22.024589   45954 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:22.024672   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:22.033456   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:22.054409   45954 start.go:303] post-start completed in 125.445528ms
	I0914 22:46:22.054427   45954 fix.go:56] fixHost completed within 19.502785226s
	I0914 22:46:22.054444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.057353   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057690   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.057721   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.057925   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.058139   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058304   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.058483   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.058657   45954 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:22.059051   45954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.175 22 <nil> <nil>}
	I0914 22:46:22.059065   45954 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:22.180023   45954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731582.133636857
	
	I0914 22:46:22.180044   45954 fix.go:206] guest clock: 1694731582.133636857
	I0914 22:46:22.180054   45954 fix.go:219] Guest: 2023-09-14 22:46:22.133636857 +0000 UTC Remote: 2023-09-14 22:46:22.054430307 +0000 UTC m=+214.661061156 (delta=79.20655ms)
	I0914 22:46:22.180078   45954 fix.go:190] guest clock delta is within tolerance: 79.20655ms
	I0914 22:46:22.180084   45954 start.go:83] releasing machines lock for "default-k8s-diff-port-799144", held for 19.628473828s
	I0914 22:46:22.180114   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.180408   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:22.183182   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183507   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.183543   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.183675   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184175   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184384   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:22.184494   45954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:22.184535   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.184627   45954 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:22.184662   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:22.187447   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187604   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187813   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.187839   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.187971   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.187986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:22.188024   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:22.188151   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:22.188153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188344   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:22.188391   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188500   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.188519   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:22.188618   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:22.303009   45954 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:22.308185   45954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:22.450504   45954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:22.455642   45954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:22.455700   45954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:22.468430   45954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:22.468453   45954 start.go:469] detecting cgroup driver to use...
	I0914 22:46:22.468509   45954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:22.483524   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:22.494650   45954 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:22.494706   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:22.506589   45954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:22.518370   45954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:22.619545   45954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:22.737486   45954 docker.go:212] disabling docker service ...
	I0914 22:46:22.737551   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:22.749267   45954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:22.759866   45954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:22.868561   45954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:22.973780   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:22.986336   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:23.004987   45954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:23.005042   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.013821   45954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:23.013889   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.022487   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.030875   45954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:23.038964   45954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:23.047246   45954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:23.054339   45954 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:23.054379   45954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:23.066649   45954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:23.077024   45954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:23.174635   45954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:23.337031   45954 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:23.337113   45954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:23.342241   45954 start.go:537] Will wait 60s for crictl version
	I0914 22:46:23.342308   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:46:23.345832   45954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:23.377347   45954 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:23.377433   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.425559   45954 ssh_runner.go:195] Run: crio --version
	I0914 22:46:23.492770   45954 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:22.203936   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Start
	I0914 22:46:22.204098   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring networks are active...
	I0914 22:46:22.204740   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network default is active
	I0914 22:46:22.205158   46412 main.go:141] libmachine: (embed-certs-588699) Ensuring network mk-embed-certs-588699 is active
	I0914 22:46:22.205524   46412 main.go:141] libmachine: (embed-certs-588699) Getting domain xml...
	I0914 22:46:22.206216   46412 main.go:141] libmachine: (embed-certs-588699) Creating domain...
	I0914 22:46:23.529479   46412 main.go:141] libmachine: (embed-certs-588699) Waiting to get IP...
	I0914 22:46:23.530274   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.530639   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.530694   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.530608   46986 retry.go:31] will retry after 299.617651ms: waiting for machine to come up
	I0914 22:46:23.494065   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetIP
	I0914 22:46:23.496974   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497458   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:23.497490   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:23.497694   45954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:23.501920   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:23.517500   45954 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:23.517542   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:23.554344   45954 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:23.554403   45954 ssh_runner.go:195] Run: which lz4
	I0914 22:46:23.558745   45954 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:23.563443   45954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:23.563488   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:25.365372   45954 crio.go:444] Took 1.806660 seconds to copy over tarball
	I0914 22:46:25.365442   45954 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:23.832332   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:23.833457   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:23.833488   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:23.832911   46986 retry.go:31] will retry after 315.838121ms: waiting for machine to come up
	I0914 22:46:24.150532   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.150980   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.151009   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.150942   46986 retry.go:31] will retry after 369.928332ms: waiting for machine to come up
	I0914 22:46:24.522720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:24.523232   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:24.523257   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:24.523145   46986 retry.go:31] will retry after 533.396933ms: waiting for machine to come up
	I0914 22:46:25.057818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.058371   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.058405   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.058318   46986 retry.go:31] will retry after 747.798377ms: waiting for machine to come up
	I0914 22:46:25.807422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:25.807912   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:25.807956   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:25.807874   46986 retry.go:31] will retry after 947.037376ms: waiting for machine to come up
	I0914 22:46:26.756214   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:26.756720   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:26.756757   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:26.756689   46986 retry.go:31] will retry after 1.117164865s: waiting for machine to come up
	I0914 22:46:27.875432   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:27.875931   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:27.875953   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:27.875886   46986 retry.go:31] will retry after 1.117181084s: waiting for machine to come up
	I0914 22:46:28.197684   45954 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.832216899s)
	I0914 22:46:28.197710   45954 crio.go:451] Took 2.832313 seconds to extract the tarball
	I0914 22:46:28.197718   45954 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:28.236545   45954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:28.286349   45954 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:28.286374   45954 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:28.286449   45954 ssh_runner.go:195] Run: crio config
	I0914 22:46:28.344205   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:28.344231   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:28.344253   45954 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:28.344289   45954 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.175 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799144 NodeName:default-k8s-diff-port-799144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:28.344454   45954 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.175
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799144"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:28.344536   45954 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-799144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0914 22:46:28.344591   45954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:28.354383   45954 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:28.354459   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:28.363277   45954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0914 22:46:28.378875   45954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:28.393535   45954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0914 22:46:28.408319   45954 ssh_runner.go:195] Run: grep 192.168.50.175	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:28.411497   45954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:28.421507   45954 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144 for IP: 192.168.50.175
	I0914 22:46:28.421536   45954 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:28.421702   45954 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:28.421742   45954 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:28.421805   45954 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.key
	I0914 22:46:28.421858   45954 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key.0216c1e7
	I0914 22:46:28.421894   45954 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key
	I0914 22:46:28.421994   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:28.422020   45954 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:28.422027   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:28.422048   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:28.422074   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:28.422095   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:28.422139   45954 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:28.422695   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:28.443528   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:46:28.463679   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:28.483317   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:28.503486   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:28.523709   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:28.544539   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:28.565904   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:28.587316   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:28.611719   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:28.632158   45954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:28.652227   45954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:28.667709   45954 ssh_runner.go:195] Run: openssl version
	I0914 22:46:28.673084   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:28.682478   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686693   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.686747   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:28.691836   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:28.701203   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:28.710996   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715353   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.715408   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:28.720765   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:28.730750   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:28.740782   45954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745186   45954 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.745250   45954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:28.750589   45954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:28.760675   45954 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:28.764920   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:28.770573   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:28.776098   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:28.783455   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:28.790699   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:28.797514   45954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:28.804265   45954 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-799144 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-799144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:28.804376   45954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:28.804427   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:28.833994   45954 cri.go:89] found id: ""
	I0914 22:46:28.834051   45954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:28.843702   45954 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:28.843724   45954 kubeadm.go:636] restartCluster start
	I0914 22:46:28.843769   45954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:28.852802   45954 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.854420   45954 kubeconfig.go:92] found "default-k8s-diff-port-799144" server: "https://192.168.50.175:8444"
	I0914 22:46:28.858058   45954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:28.866914   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.866968   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.877946   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.877969   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:28.878014   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:28.888579   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.389311   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.389420   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.401725   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:29.889346   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:29.889451   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:29.902432   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.388985   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.389062   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.401302   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:30.888853   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:30.888949   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:30.901032   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.389622   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.389733   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.405102   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:31.888685   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:31.888803   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:31.904300   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:32.388876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.388944   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.402419   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:28.995080   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:28.999205   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:28.999224   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:28.995414   46986 retry.go:31] will retry after 1.657878081s: waiting for machine to come up
	I0914 22:46:30.655422   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:30.656029   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:30.656059   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:30.655960   46986 retry.go:31] will retry after 2.320968598s: waiting for machine to come up
	I0914 22:46:32.978950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:32.979423   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:32.979452   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:32.979369   46986 retry.go:31] will retry after 2.704173643s: waiting for machine to come up
	I0914 22:46:32.889585   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:32.889658   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:32.902514   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.388806   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.388906   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.405028   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:33.889633   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:33.889728   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:33.906250   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.388736   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.388810   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.403376   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:34.888851   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:34.888934   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:34.905873   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.389446   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.389516   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.404872   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.889475   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:35.889569   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:35.902431   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.388954   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.389054   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.401778   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:36.889442   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:36.889529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:36.902367   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:37.388925   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.389009   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.401860   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:35.685608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:35.686027   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:35.686064   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:35.685964   46986 retry.go:31] will retry after 2.240780497s: waiting for machine to come up
	I0914 22:46:37.928020   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:37.928402   46412 main.go:141] libmachine: (embed-certs-588699) DBG | unable to find current IP address of domain embed-certs-588699 in network mk-embed-certs-588699
	I0914 22:46:37.928442   46412 main.go:141] libmachine: (embed-certs-588699) DBG | I0914 22:46:37.928354   46986 retry.go:31] will retry after 2.734049647s: waiting for machine to come up
	I0914 22:46:41.860186   46713 start.go:369] acquired machines lock for "old-k8s-version-930717" in 1m21.238611742s
	I0914 22:46:41.860234   46713 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:46:41.860251   46713 fix.go:54] fixHost starting: 
	I0914 22:46:41.860683   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:41.860738   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:41.877474   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0914 22:46:41.877964   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:41.878542   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:46:41.878568   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:41.878874   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:41.879057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:46:41.879276   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:46:41.880990   46713 fix.go:102] recreateIfNeeded on old-k8s-version-930717: state=Stopped err=<nil>
	I0914 22:46:41.881019   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	W0914 22:46:41.881175   46713 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:46:41.883128   46713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-930717" ...
	I0914 22:46:37.888876   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:37.888950   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:37.901522   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.389056   45954 api_server.go:166] Checking apiserver status ...
	I0914 22:46:38.389140   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:38.400632   45954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:38.867426   45954 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:38.867461   45954 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:38.867487   45954 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:38.867557   45954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:38.898268   45954 cri.go:89] found id: ""
	I0914 22:46:38.898328   45954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:38.914871   45954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:38.924737   45954 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:38.924785   45954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934436   45954 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:38.934455   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.042672   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:39.982954   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.158791   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.235541   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:40.312855   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:46:40.312926   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.328687   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:40.842859   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.343019   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:41.842336   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.342351   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
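(Editorial sketch, not part of the captured log.) The repeated "Checking apiserver status" entries above show minikube polling for the kube-apiserver process by running "sudo pgrep -xnf kube-apiserver.*minikube.*" inside the VM roughly every half second until it exits 0. The following is a minimal, self-contained Go sketch of that polling loop under stated assumptions: it runs pgrep locally rather than over SSH (minikube drives it through its ssh_runner), and the pattern, interval, and timeout are illustrative values, not minikube's actual api_server.go code.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess retries `sudo pgrep -xnf <pattern>` until it prints a PID
// (exit 0) or the context expires. pgrep exiting 1 corresponds to the
// "stopped: unable to get apiserver pid" lines in the log above.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) (string, error) {
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond)
	if err != nil {
		fmt.Println("apiserver did not appear:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}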
	I0914 22:46:40.665315   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.665775   46412 main.go:141] libmachine: (embed-certs-588699) Found IP for machine: 192.168.61.205
	I0914 22:46:40.665795   46412 main.go:141] libmachine: (embed-certs-588699) Reserving static IP address...
	I0914 22:46:40.665807   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has current primary IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.666273   46412 main.go:141] libmachine: (embed-certs-588699) Reserved static IP address: 192.168.61.205
	I0914 22:46:40.666316   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.666334   46412 main.go:141] libmachine: (embed-certs-588699) Waiting for SSH to be available...
	I0914 22:46:40.666375   46412 main.go:141] libmachine: (embed-certs-588699) DBG | skip adding static IP to network mk-embed-certs-588699 - found existing host DHCP lease matching {name: "embed-certs-588699", mac: "52:54:00:a8:60:d3", ip: "192.168.61.205"}
	I0914 22:46:40.666401   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Getting to WaitForSSH function...
	I0914 22:46:40.668206   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668515   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.668542   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.668654   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH client type: external
	I0914 22:46:40.668689   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa (-rw-------)
	I0914 22:46:40.668716   46412 main.go:141] libmachine: (embed-certs-588699) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:46:40.668728   46412 main.go:141] libmachine: (embed-certs-588699) DBG | About to run SSH command:
	I0914 22:46:40.668736   46412 main.go:141] libmachine: (embed-certs-588699) DBG | exit 0
	I0914 22:46:40.751202   46412 main.go:141] libmachine: (embed-certs-588699) DBG | SSH cmd err, output: <nil>: 
	I0914 22:46:40.751584   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetConfigRaw
	I0914 22:46:40.752291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:40.754685   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755054   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.755087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.755318   46412 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/config.json ...
	I0914 22:46:40.755578   46412 machine.go:88] provisioning docker machine ...
	I0914 22:46:40.755603   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:40.755799   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.755940   46412 buildroot.go:166] provisioning hostname "embed-certs-588699"
	I0914 22:46:40.755959   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:40.756109   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.758111   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758435   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.758481   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.758547   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.758686   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758798   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.758983   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.759108   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.759567   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.759586   46412 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-588699 && echo "embed-certs-588699" | sudo tee /etc/hostname
	I0914 22:46:40.882559   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-588699
	
	I0914 22:46:40.882615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:40.885741   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886087   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:40.886137   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:40.886403   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:40.886635   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886810   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:40.886964   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:40.887176   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:40.887633   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:40.887662   46412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-588699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-588699/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-588699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:46:41.007991   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:46:41.008024   46412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:46:41.008075   46412 buildroot.go:174] setting up certificates
	I0914 22:46:41.008103   46412 provision.go:83] configureAuth start
	I0914 22:46:41.008118   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetMachineName
	I0914 22:46:41.008615   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.011893   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012262   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.012295   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.012467   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.014904   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015343   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.015378   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.015551   46412 provision.go:138] copyHostCerts
	I0914 22:46:41.015605   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:46:41.015618   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:46:41.015691   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:46:41.015847   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:46:41.015864   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:46:41.015897   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:46:41.015979   46412 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:46:41.015989   46412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:46:41.016019   46412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:46:41.016080   46412 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.embed-certs-588699 san=[192.168.61.205 192.168.61.205 localhost 127.0.0.1 minikube embed-certs-588699]
	I0914 22:46:41.134486   46412 provision.go:172] copyRemoteCerts
	I0914 22:46:41.134537   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:46:41.134559   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.137472   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137789   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.137818   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.137995   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.138216   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.138365   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.138536   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.224196   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:46:41.244551   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:46:41.267745   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:46:41.292472   46412 provision.go:86] duration metric: configureAuth took 284.355734ms
	I0914 22:46:41.292497   46412 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:46:41.292668   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:41.292748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.295661   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296010   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.296042   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.296246   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.296469   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296652   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.296836   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.297031   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.297522   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.297556   46412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:46:41.609375   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:46:41.609417   46412 machine.go:91] provisioned docker machine in 853.82264ms
	I0914 22:46:41.609431   46412 start.go:300] post-start starting for "embed-certs-588699" (driver="kvm2")
	I0914 22:46:41.609444   46412 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:46:41.609472   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.609831   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:46:41.609890   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.613037   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613497   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.613525   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.613662   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.613854   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.614023   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.614142   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.704618   46412 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:46:41.709759   46412 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:46:41.709787   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:46:41.709867   46412 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:46:41.709991   46412 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:46:41.710127   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:46:41.721261   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:41.742359   46412 start.go:303] post-start completed in 132.913862ms
	I0914 22:46:41.742387   46412 fix.go:56] fixHost completed within 19.562130605s
	I0914 22:46:41.742418   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.745650   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.746172   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.746369   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.746564   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746781   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.746944   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.747138   46412 main.go:141] libmachine: Using SSH client type: native
	I0914 22:46:41.747629   46412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.205 22 <nil> <nil>}
	I0914 22:46:41.747648   46412 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:46:41.860006   46412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731601.811427748
	
	I0914 22:46:41.860030   46412 fix.go:206] guest clock: 1694731601.811427748
	I0914 22:46:41.860040   46412 fix.go:219] Guest: 2023-09-14 22:46:41.811427748 +0000 UTC Remote: 2023-09-14 22:46:41.742391633 +0000 UTC m=+142.955285980 (delta=69.036115ms)
	I0914 22:46:41.860091   46412 fix.go:190] guest clock delta is within tolerance: 69.036115ms
	I0914 22:46:41.860098   46412 start.go:83] releasing machines lock for "embed-certs-588699", held for 19.679882828s
	I0914 22:46:41.860131   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.860411   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:41.863136   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863584   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.863618   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.863721   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864206   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864398   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:46:41.864477   46412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:46:41.864514   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.864639   46412 ssh_runner.go:195] Run: cat /version.json
	I0914 22:46:41.864666   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:46:41.867568   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867608   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.867950   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.867976   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:41.868028   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:41.868147   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868248   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:46:41.868373   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868579   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.868691   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:46:41.868833   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.868876   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:46:41.869026   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:46:41.980624   46412 ssh_runner.go:195] Run: systemctl --version
	I0914 22:46:41.986113   46412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:46:42.134956   46412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:46:42.141030   46412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:46:42.141101   46412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:46:42.158635   46412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:46:42.158660   46412 start.go:469] detecting cgroup driver to use...
	I0914 22:46:42.158722   46412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:46:42.173698   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:46:42.184948   46412 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:46:42.185007   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:46:42.196434   46412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:46:42.208320   46412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:46:42.326624   46412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:46:42.459498   46412 docker.go:212] disabling docker service ...
	I0914 22:46:42.459567   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:46:42.472479   46412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:46:42.486651   46412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:46:42.636161   46412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:46:42.739841   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:46:42.758562   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:46:42.779404   46412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:46:42.779472   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.787902   46412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:46:42.787954   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.799513   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.811428   46412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:46:42.823348   46412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:46:42.835569   46412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:46:42.842820   46412 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:46:42.842885   46412 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:46:42.855225   46412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:46:42.863005   46412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:46:42.979756   46412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:46:43.181316   46412 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:46:43.181384   46412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:46:43.191275   46412 start.go:537] Will wait 60s for crictl version
	I0914 22:46:43.191343   46412 ssh_runner.go:195] Run: which crictl
	I0914 22:46:43.196264   46412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:46:43.228498   46412 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:46:43.228589   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.281222   46412 ssh_runner.go:195] Run: crio --version
	I0914 22:46:43.341816   46412 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:46:43.343277   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetIP
	I0914 22:46:43.346473   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.346835   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:46:43.346882   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:46:43.347084   46412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 22:46:43.351205   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:43.364085   46412 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:46:43.364156   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:43.400558   46412 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:46:43.400634   46412 ssh_runner.go:195] Run: which lz4
	I0914 22:46:43.404906   46412 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:46:43.409239   46412 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:46:43.409277   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0914 22:46:41.885236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Start
	I0914 22:46:41.885399   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring networks are active...
	I0914 22:46:41.886125   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network default is active
	I0914 22:46:41.886511   46713 main.go:141] libmachine: (old-k8s-version-930717) Ensuring network mk-old-k8s-version-930717 is active
	I0914 22:46:41.886855   46713 main.go:141] libmachine: (old-k8s-version-930717) Getting domain xml...
	I0914 22:46:41.887524   46713 main.go:141] libmachine: (old-k8s-version-930717) Creating domain...
	I0914 22:46:43.317748   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting to get IP...
	I0914 22:46:43.318757   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.319197   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.319288   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.319176   47160 retry.go:31] will retry after 287.487011ms: waiting for machine to come up
	I0914 22:46:43.608890   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.609712   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.609738   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.609656   47160 retry.go:31] will retry after 289.187771ms: waiting for machine to come up
	I0914 22:46:43.900234   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:43.900655   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:43.900679   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:43.900576   47160 retry.go:31] will retry after 433.007483ms: waiting for machine to come up
	I0914 22:46:44.335318   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.335775   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.335804   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.335727   47160 retry.go:31] will retry after 383.295397ms: waiting for machine to come up
	I0914 22:46:44.720415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:44.720967   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:44.721001   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:44.720856   47160 retry.go:31] will retry after 698.454643ms: waiting for machine to come up
	I0914 22:46:45.420833   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:45.421349   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:45.421391   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:45.421297   47160 retry.go:31] will retry after 938.590433ms: waiting for machine to come up
	I0914 22:46:42.842954   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:46:42.867206   45954 api_server.go:72] duration metric: took 2.554352134s to wait for apiserver process to appear ...
	I0914 22:46:42.867238   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:46:42.867257   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.755748   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:46:46.755780   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:46:46.755832   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:46.873209   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:46.873243   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.373637   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.391311   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.391349   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:47.873646   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:47.880286   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:46:47.880323   45954 api_server.go:103] status: https://192.168.50.175:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:46:48.373423   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:46:48.389682   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:46:48.415694   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:46:48.415727   45954 api_server.go:131] duration metric: took 5.548481711s to wait for apiserver health ...
	I0914 22:46:48.415739   45954 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.415748   45954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.417375   45954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:46:45.238555   46412 crio.go:444] Took 1.833681 seconds to copy over tarball
	I0914 22:46:45.238634   46412 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:46:48.251155   46412 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012492519s)
	I0914 22:46:48.251176   46412 crio.go:451] Took 3.012596 seconds to extract the tarball
	I0914 22:46:48.251184   46412 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:46:48.290336   46412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:46:48.338277   46412 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:46:48.338302   46412 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:46:48.338378   46412 ssh_runner.go:195] Run: crio config
	I0914 22:46:48.402542   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:46:48.402564   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:46:48.402583   46412 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:46:48.402604   46412 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.205 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-588699 NodeName:embed-certs-588699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:46:48.402791   46412 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-588699"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:46:48.402883   46412 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-588699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:46:48.402958   46412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:46:48.414406   46412 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:46:48.414484   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:46:48.426437   46412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 22:46:48.445351   46412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:46:48.463696   46412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0914 22:46:48.481887   46412 ssh_runner.go:195] Run: grep 192.168.61.205	control-plane.minikube.internal$ /etc/hosts
	I0914 22:46:48.485825   46412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:46:48.500182   46412 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699 for IP: 192.168.61.205
	I0914 22:46:48.500215   46412 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:48.500362   46412 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:46:48.500417   46412 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:46:48.500514   46412 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/client.key
	I0914 22:46:48.500600   46412 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key.8dac69f7
	I0914 22:46:48.500726   46412 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key
	I0914 22:46:48.500885   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:46:48.500926   46412 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:46:48.500942   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:46:48.500976   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:46:48.501008   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:46:48.501039   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:46:48.501096   46412 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:46:48.501918   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:46:48.528790   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:46:48.558557   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:46:48.583664   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/embed-certs-588699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:46:48.608274   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:46:48.631638   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:46:48.655163   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:46:48.677452   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:46:48.700443   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:46:48.724547   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:46:48.751559   46412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:46:48.778910   46412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:46:48.794369   46412 ssh_runner.go:195] Run: openssl version
	I0914 22:46:48.799778   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:46:48.809263   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814790   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.814848   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:46:48.820454   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:46:48.829942   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:46:46.361228   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:46.361816   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:46.361846   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:46.361795   47160 retry.go:31] will retry after 1.00738994s: waiting for machine to come up
	I0914 22:46:47.370525   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:47.370964   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:47.370991   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:47.370921   47160 retry.go:31] will retry after 1.441474351s: waiting for machine to come up
	I0914 22:46:48.813921   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:48.814415   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:48.814447   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:48.814362   47160 retry.go:31] will retry after 1.497562998s: waiting for machine to come up
	I0914 22:46:50.313674   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:50.314191   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:50.314221   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:50.314137   47160 retry.go:31] will retry after 1.620308161s: waiting for machine to come up
	I0914 22:46:48.418825   45954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:46:48.456715   45954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:46:48.496982   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:46:48.515172   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:46:48.515209   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:46:48.515223   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:46:48.515234   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:46:48.515247   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:46:48.515261   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:46:48.515272   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:46:48.515285   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:46:48.515295   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:46:48.515307   45954 system_pods.go:74] duration metric: took 18.305048ms to wait for pod list to return data ...
	I0914 22:46:48.515320   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:46:48.518842   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:46:48.518875   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:46:48.518888   45954 node_conditions.go:105] duration metric: took 3.562448ms to run NodePressure ...
	I0914 22:46:48.518908   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:50.951051   45954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.432118027s)
	I0914 22:46:50.951087   45954 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959708   45954 kubeadm.go:787] kubelet initialised
	I0914 22:46:50.959735   45954 kubeadm.go:788] duration metric: took 8.637125ms waiting for restarted kubelet to initialise ...
	I0914 22:46:50.959745   45954 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:50.966214   45954 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.975076   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975106   45954 pod_ready.go:81] duration metric: took 8.863218ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.975118   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.975129   45954 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.982438   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982471   45954 pod_ready.go:81] duration metric: took 7.330437ms waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.982485   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.982493   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:50.991067   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991102   45954 pod_ready.go:81] duration metric: took 8.574268ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:50.991115   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:50.991125   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.006696   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006732   45954 pod_ready.go:81] duration metric: took 15.595604ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.006745   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.006755   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.354645   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354678   45954 pod_ready.go:81] duration metric: took 347.913938ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.354690   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-proxy-j2qmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.354702   45954 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:51.754959   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.754998   45954 pod_ready.go:81] duration metric: took 400.283619ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:51.755012   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:51.755022   45954 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:52.156253   45954 pod_ready.go:97] node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156299   45954 pod_ready.go:81] duration metric: took 401.260791ms waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:46:52.156314   45954 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799144" hosting pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:52.156327   45954 pod_ready.go:38] duration metric: took 1.196571114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:52.156352   45954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:46:52.169026   45954 ops.go:34] apiserver oom_adj: -16
	I0914 22:46:52.169049   45954 kubeadm.go:640] restartCluster took 23.325317121s
	I0914 22:46:52.169059   45954 kubeadm.go:406] StartCluster complete in 23.364799998s
	I0914 22:46:52.169079   45954 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.169161   45954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:46:52.171787   45954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:46:52.172077   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:46:52.172229   45954 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:46:52.172310   45954 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172332   45954 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-799144"
	I0914 22:46:52.172325   45954 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799144"
	W0914 22:46:52.172340   45954 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:46:52.172347   45954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799144"
	I0914 22:46:52.172351   45954 config.go:182] Loaded profile config "default-k8s-diff-port-799144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:46:52.172394   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.172394   45954 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799144"
	I0914 22:46:52.172424   45954 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.172436   45954 addons.go:240] addon metrics-server should already be in state true
	I0914 22:46:52.172500   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.173205   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173252   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173383   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173451   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.173744   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.173822   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.178174   45954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-799144" context rescaled to 1 replicas
	I0914 22:46:52.178208   45954 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.175 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:46:52.180577   45954 out.go:177] * Verifying Kubernetes components...
	I0914 22:46:52.182015   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:46:52.194030   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0914 22:46:52.194040   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I0914 22:46:52.194506   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.194767   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.195059   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195078   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195219   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.195235   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.195420   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.195642   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.195715   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.196346   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.196392   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.198560   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0914 22:46:52.199130   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.199612   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.199641   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.199995   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.200530   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.200575   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.206536   45954 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-799144"
	W0914 22:46:52.206558   45954 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:46:52.206584   45954 host.go:66] Checking if "default-k8s-diff-port-799144" exists ...
	I0914 22:46:52.206941   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.206973   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.215857   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0914 22:46:52.216266   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.216801   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.216825   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.217297   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.217484   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.220211   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I0914 22:46:52.220740   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.221296   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.221314   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.221798   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.221986   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.222185   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.224162   45954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:46:52.224261   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.225483   45954 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.225494   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:46:52.225511   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.225526   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0914 22:46:52.227067   45954 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:46:52.225976   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.228337   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:46:52.228354   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:46:52.228373   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.228750   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.228764   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.228959   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229601   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.229674   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.229702   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.229908   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.230068   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.230171   45954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:46:52.230203   45954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:46:52.230280   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.230503   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.232673   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233097   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.233153   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.233332   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.233536   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.233684   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.233821   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.251500   45954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43473
	I0914 22:46:52.252069   45954 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:46:52.252702   45954 main.go:141] libmachine: Using API Version  1
	I0914 22:46:52.252722   45954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:46:52.253171   45954 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:46:52.253419   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetState
	I0914 22:46:52.255233   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .DriverName
	I0914 22:46:52.255574   45954 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.255591   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:46:52.255609   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHHostname
	I0914 22:46:52.258620   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259146   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:44:c7", ip: ""} in network mk-default-k8s-diff-port-799144: {Iface:virbr2 ExpiryTime:2023-09-14 23:46:14 +0000 UTC Type:0 Mac:52:54:00:ee:44:c7 Iaid: IPaddr:192.168.50.175 Prefix:24 Hostname:default-k8s-diff-port-799144 Clientid:01:52:54:00:ee:44:c7}
	I0914 22:46:52.259178   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | domain default-k8s-diff-port-799144 has defined IP address 192.168.50.175 and MAC address 52:54:00:ee:44:c7 in network mk-default-k8s-diff-port-799144
	I0914 22:46:52.259379   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHPort
	I0914 22:46:52.259584   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHKeyPath
	I0914 22:46:52.259754   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .GetSSHUsername
	I0914 22:46:52.259961   45954 sshutil.go:53] new ssh client: &{IP:192.168.50.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/default-k8s-diff-port-799144/id_rsa Username:docker}
	I0914 22:46:52.350515   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:46:52.367291   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:46:52.367309   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:46:52.413141   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:46:52.413170   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:46:52.419647   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:46:52.462672   45954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:52.462698   45954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:46:52.519331   45954 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:46:52.519330   45954 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:52.530851   45954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:46:53.719523   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368967292s)
	I0914 22:46:53.719575   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719582   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299890259s)
	I0914 22:46:53.719616   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.719638   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.719589   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720079   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720083   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720097   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720101   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720103   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720107   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720111   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720119   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720121   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720080   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720404   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720414   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720425   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720444   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.720501   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720525   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.720538   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.720553   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.720804   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.720822   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.721724   45954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.190817165s)
	I0914 22:46:53.721771   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.721784   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.722084   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.722100   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.722089   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.722115   45954 main.go:141] libmachine: Making call to close driver server
	I0914 22:46:53.722128   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) Calling .Close
	I0914 22:46:53.723592   45954 main.go:141] libmachine: (default-k8s-diff-port-799144) DBG | Closing plugin on server side
	I0914 22:46:53.723602   45954 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:46:53.723614   45954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:46:53.723631   45954 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-799144"
	I0914 22:46:53.725666   45954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:46:48.840421   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.179960   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.180026   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:46:49.185490   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:46:49.194744   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:46:49.205937   46412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210532   46412 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.210582   46412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:46:49.215917   46412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:46:49.225393   46412 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:46:49.229604   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:46:49.234795   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:46:49.239907   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:46:49.245153   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:46:49.250558   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:46:49.256142   46412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:46:49.261518   46412 kubeadm.go:404] StartCluster: {Name:embed-certs-588699 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-588699 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:46:49.261618   46412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:46:49.261687   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:49.291460   46412 cri.go:89] found id: ""
	I0914 22:46:49.291560   46412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:46:49.300496   46412 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:46:49.300558   46412 kubeadm.go:636] restartCluster start
	I0914 22:46:49.300616   46412 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:46:49.309827   46412 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.311012   46412 kubeconfig.go:92] found "embed-certs-588699" server: "https://192.168.61.205:8443"
	I0914 22:46:49.313336   46412 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:46:49.321470   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.321528   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.332257   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.332275   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.332320   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.345427   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:49.846146   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:49.846240   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:49.859038   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.345492   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.345583   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.358070   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:50.845544   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:50.845605   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:50.861143   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.345602   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.345675   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.357406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:51.845964   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:51.846082   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:51.860079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.346093   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.346159   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.360952   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:52.845612   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:52.845717   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:52.860504   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:53.345991   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.360947   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
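
	The repeated "Checking apiserver status" / "stopped: unable to get apiserver pid" entries above come from polling `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until the process shows up. A minimal Go sketch of that polling pattern follows; running the command locally through os/exec and the 500ms interval are assumptions standing in for minikube's ssh_runner and its real timing, not the actual implementation.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // waitForAPIServerPID polls pgrep until kube-apiserver exists or the deadline passes.
	    func waitForAPIServerPID(timeout time.Duration) (string, error) {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		// pgrep exits with status 1 while no matching process exists,
	    		// which is what the W-level log lines above record.
	    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	    		if err == nil {
	    			return string(out), nil
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return "", fmt.Errorf("timed out after %s waiting for kube-apiserver", timeout)
	    }

	    func main() {
	    	pid, err := waitForAPIServerPID(30 * time.Second)
	    	if err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	fmt.Printf("kube-apiserver pid: %s\n", pid)
	    }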
	I0914 22:46:51.936297   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:51.936809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:51.936840   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:51.936747   47160 retry.go:31] will retry after 2.284330296s: waiting for machine to come up
	I0914 22:46:54.222960   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:54.223478   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:54.223530   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:54.223417   47160 retry.go:31] will retry after 3.537695113s: waiting for machine to come up
	I0914 22:46:53.726984   45954 addons.go:502] enable addons completed in 1.554762762s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:46:54.641725   45954 node_ready.go:58] node "default-k8s-diff-port-799144" has status "Ready":"False"
	I0914 22:46:57.141217   45954 node_ready.go:49] node "default-k8s-diff-port-799144" has status "Ready":"True"
	I0914 22:46:57.141240   45954 node_ready.go:38] duration metric: took 4.621872993s waiting for node "default-k8s-diff-port-799144" to be "Ready" ...
	I0914 22:46:57.141250   45954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:46:57.151019   45954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162159   45954 pod_ready.go:92] pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace has status "Ready":"True"
	I0914 22:46:57.162180   45954 pod_ready.go:81] duration metric: took 11.133949ms waiting for pod "coredns-5dd5756b68-8phxz" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:57.162189   45954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:46:53.845734   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:53.845815   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:53.858406   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.346078   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.346138   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.360079   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:54.845738   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:54.845801   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:54.861945   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.346533   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.346627   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.360445   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:55.845577   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:55.845681   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:55.856800   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.346374   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.346461   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.357724   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:56.846264   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:56.846376   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:56.857963   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.346006   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.346074   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.357336   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.845877   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:57.845944   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:57.857310   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:58.345855   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.345925   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.357766   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:57.762315   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:46:57.762689   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | unable to find current IP address of domain old-k8s-version-930717 in network mk-old-k8s-version-930717
	I0914 22:46:57.762714   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | I0914 22:46:57.762651   47160 retry.go:31] will retry after 3.773493672s: waiting for machine to come up
	I0914 22:46:59.185077   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:01.185320   45954 pod_ready.go:102] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:02.912525   45407 start.go:369] acquired machines lock for "no-preload-344363" in 55.358672707s
	I0914 22:47:02.912580   45407 start.go:96] Skipping create...Using existing machine configuration
	I0914 22:47:02.912592   45407 fix.go:54] fixHost starting: 
	I0914 22:47:02.913002   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:47:02.913035   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:47:02.932998   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0914 22:47:02.933535   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:47:02.933956   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:47:02.933977   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:47:02.934303   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:47:02.934484   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:02.934627   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:47:02.936412   45407 fix.go:102] recreateIfNeeded on no-preload-344363: state=Stopped err=<nil>
	I0914 22:47:02.936438   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	W0914 22:47:02.936601   45407 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 22:47:02.938235   45407 out.go:177] * Restarting existing kvm2 VM for "no-preload-344363" ...
	I0914 22:46:58.845728   46412 api_server.go:166] Checking apiserver status ...
	I0914 22:46:58.845806   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:46:58.859436   46412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:46:59.322167   46412 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:46:59.322206   46412 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:46:59.322218   46412 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:46:59.322278   46412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:46:59.352268   46412 cri.go:89] found id: ""
	I0914 22:46:59.352371   46412 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:46:59.366742   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:46:59.374537   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:46:59.374598   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382227   46412 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:46:59.382251   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:46:59.486171   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.268311   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.462362   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.528925   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:00.601616   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:00.601697   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:00.623311   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.140972   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:01.640574   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.141044   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:02.640374   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.140881   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:03.166662   46412 api_server.go:72] duration metric: took 2.565044214s to wait for apiserver process to appear ...
	I0914 22:47:03.166688   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:03.166703   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
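
	From here the log switches from waiting for the apiserver process to polling https://192.168.61.205:8443/healthz; the entries further below show anonymous requests answered first with 403 and then 500 while post-start hooks finish. A small Go sketch of such a probe, assuming an insecure TLS client, a 2s request timeout, and a plain one-second retry loop (none of which are taken from minikube's code):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout: 2 * time.Second,
	    		Transport: &http.Transport{
	    			// Assumption: skip verification of the cluster's self-signed cert.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	url := "https://192.168.61.205:8443/healthz"
	    	for i := 0; i < 30; i++ {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
	    			// 403 (anonymous user) and 500 (subsystems still starting) are retried;
	    			// only a 200 "ok" counts as healthy.
	    			if resp.StatusCode == http.StatusOK {
	    				return
	    			}
	    		}
	    		time.Sleep(time.Second)
	    	}
	    	fmt.Println("healthz never became ok")
	    }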
	I0914 22:47:01.540578   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541058   46713 main.go:141] libmachine: (old-k8s-version-930717) Found IP for machine: 192.168.72.70
	I0914 22:47:01.541095   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has current primary IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.541106   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserving static IP address...
	I0914 22:47:01.541552   46713 main.go:141] libmachine: (old-k8s-version-930717) Reserved static IP address: 192.168.72.70
	I0914 22:47:01.541579   46713 main.go:141] libmachine: (old-k8s-version-930717) Waiting for SSH to be available...
	I0914 22:47:01.541613   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.541646   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | skip adding static IP to network mk-old-k8s-version-930717 - found existing host DHCP lease matching {name: "old-k8s-version-930717", mac: "52:54:00:12:a5:28", ip: "192.168.72.70"}
	I0914 22:47:01.541672   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Getting to WaitForSSH function...
	I0914 22:47:01.543898   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544285   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.544317   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.544428   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH client type: external
	I0914 22:47:01.544451   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa (-rw-------)
	I0914 22:47:01.544499   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:01.544518   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | About to run SSH command:
	I0914 22:47:01.544552   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | exit 0
	I0914 22:47:01.639336   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:01.639694   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetConfigRaw
	I0914 22:47:01.640324   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.642979   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643345   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.643389   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.643643   46713 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/config.json ...
	I0914 22:47:01.643833   46713 machine.go:88] provisioning docker machine ...
	I0914 22:47:01.643855   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:01.644085   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644249   46713 buildroot.go:166] provisioning hostname "old-k8s-version-930717"
	I0914 22:47:01.644272   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.644434   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.646429   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.646771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.646819   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.647008   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.647209   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647360   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.647536   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.647737   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.648245   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.648270   46713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-930717 && echo "old-k8s-version-930717" | sudo tee /etc/hostname
	I0914 22:47:01.789438   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-930717
	
	I0914 22:47:01.789472   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.792828   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793229   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.793277   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.793459   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:01.793644   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793778   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:01.793953   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:01.794120   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:01.794459   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:01.794478   46713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-930717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-930717/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-930717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:01.928496   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:01.928536   46713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:01.928567   46713 buildroot.go:174] setting up certificates
	I0914 22:47:01.928586   46713 provision.go:83] configureAuth start
	I0914 22:47:01.928609   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetMachineName
	I0914 22:47:01.928914   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:01.931976   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932368   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.932398   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.932542   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:01.934939   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935311   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:01.935344   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:01.935480   46713 provision.go:138] copyHostCerts
	I0914 22:47:01.935537   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:01.935548   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:01.935620   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:01.935775   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:01.935789   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:01.935824   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:01.935970   46713 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:01.935981   46713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:01.936010   46713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:01.936086   46713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-930717 san=[192.168.72.70 192.168.72.70 localhost 127.0.0.1 minikube old-k8s-version-930717]
	I0914 22:47:02.167446   46713 provision.go:172] copyRemoteCerts
	I0914 22:47:02.167510   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:02.167534   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.170442   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.170862   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.170900   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.171089   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.171302   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.171496   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.171645   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.267051   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:02.289098   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 22:47:02.312189   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:02.334319   46713 provision.go:86] duration metric: configureAuth took 405.716896ms
	I0914 22:47:02.334346   46713 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:02.334555   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:47:02.334638   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.337255   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337605   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.337637   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.337730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.337949   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338100   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.338240   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.338384   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.338859   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.338890   46713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:02.654307   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:02.654332   46713 machine.go:91] provisioned docker machine in 1.010485195s
	I0914 22:47:02.654345   46713 start.go:300] post-start starting for "old-k8s-version-930717" (driver="kvm2")
	I0914 22:47:02.654358   46713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:02.654382   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.654747   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:02.654782   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.657773   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658153   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.658182   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.658425   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.658630   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.658812   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.659001   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.750387   46713 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:02.754444   46713 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:02.754468   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:02.754545   46713 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:02.754654   46713 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:02.754762   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:02.765781   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:02.788047   46713 start.go:303] post-start completed in 133.686385ms
	I0914 22:47:02.788072   46713 fix.go:56] fixHost completed within 20.927830884s
	I0914 22:47:02.788098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.791051   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791408   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.791441   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.791628   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.791840   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792041   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.792215   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.792383   46713 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:02.792817   46713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.70 22 <nil> <nil>}
	I0914 22:47:02.792836   46713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:02.912359   46713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731622.856601606
	
	I0914 22:47:02.912381   46713 fix.go:206] guest clock: 1694731622.856601606
	I0914 22:47:02.912391   46713 fix.go:219] Guest: 2023-09-14 22:47:02.856601606 +0000 UTC Remote: 2023-09-14 22:47:02.788077838 +0000 UTC m=+102.306332554 (delta=68.523768ms)
	I0914 22:47:02.912413   46713 fix.go:190] guest clock delta is within tolerance: 68.523768ms
	I0914 22:47:02.912424   46713 start.go:83] releasing machines lock for "old-k8s-version-930717", held for 21.052207532s
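
	The guest-clock lines just above compare the VM's `date +%s.%N` output (logged with Go's %!s(MISSING) placeholder artifact) against the host time and accept the 68.5ms drift as within tolerance. A Go sketch of that comparison, with the one-second tolerance and the nine-digit fractional part as assumptions and the sample value copied from the log:

	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    func main() {
	    	// In a live check this string would be read over SSH from the guest.
	    	guestOut := "1694731622.856601606" // sample from the log above
	    	parts := strings.SplitN(guestOut, ".", 2)
	    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	    	nsec, _ := strconv.ParseInt(parts[1], 10, 64) // assumes a 9-digit fraction
	    	guest := time.Unix(sec, nsec)

	    	delta := time.Since(guest)
	    	if delta < 0 {
	    		delta = -delta
	    	}
	    	fmt.Printf("guest clock delta: %v\n", delta)
	    	if delta <= time.Second { // assumed tolerance
	    		fmt.Println("guest clock delta is within tolerance")
	    	}
	    }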
	I0914 22:47:02.912457   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.912730   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:02.915769   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916200   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.916265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.916453   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917073   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917245   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:47:02.917352   46713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:02.917397   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.917535   46713 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:02.917563   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:47:02.920256   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920363   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920656   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920695   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920724   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:02.920744   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:02.920959   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921098   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:47:02.921261   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921282   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:47:02.921431   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921489   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:47:02.921567   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:02.921635   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:47:03.014070   46713 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:03.047877   46713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:03.192347   46713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:03.200249   46713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:03.200324   46713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:03.215110   46713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:03.215138   46713 start.go:469] detecting cgroup driver to use...
	I0914 22:47:03.215201   46713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:03.228736   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:03.241326   46713 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:03.241377   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:03.253001   46713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:03.264573   46713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:03.371107   46713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:03.512481   46713 docker.go:212] disabling docker service ...
	I0914 22:47:03.512554   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:03.526054   46713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:03.537583   46713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:03.662087   46713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:03.793448   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:03.807574   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:03.828240   46713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0914 22:47:03.828311   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.842435   46713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:03.842490   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.856199   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.867448   46713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:03.878222   46713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:03.891806   46713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:03.899686   46713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:03.899740   46713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:03.912584   46713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:03.920771   46713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:04.040861   46713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:47:04.230077   46713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:04.230147   46713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:04.235664   46713 start.go:537] Will wait 60s for crictl version
	I0914 22:47:04.235726   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:04.239737   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:04.279680   46713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:04.279755   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.329363   46713 ssh_runner.go:195] Run: crio --version
	I0914 22:47:04.389025   46713 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
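
	Before this point the log reconfigures cri-o for the old-k8s-version profile: the pause image is pinned to registry.k8s.io/pause:3.1, the cgroup manager is forced to cgroupfs, and crio is restarted. A Go sketch of those three steps, with the sed expressions copied from the log and local os/exec standing in (as an assumption) for minikube's ssh_runner:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func run(name string, args ...string) error {
	    	out, err := exec.Command(name, args...).CombinedOutput()
	    	if err != nil {
	    		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	    	}
	    	return nil
	    }

	    func main() {
	    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	    	steps := [][]string{
	    		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|`, conf},
	    		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
	    		{"sudo", "systemctl", "restart", "crio"},
	    	}
	    	for _, s := range steps {
	    		if err := run(s[0], s[1:]...); err != nil {
	    			fmt.Println(err)
	    			return
	    		}
	    	}
	    	fmt.Println("cri-o reconfigured and restarted")
	    }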
	I0914 22:47:02.939505   45407 main.go:141] libmachine: (no-preload-344363) Calling .Start
	I0914 22:47:02.939701   45407 main.go:141] libmachine: (no-preload-344363) Ensuring networks are active...
	I0914 22:47:02.940415   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network default is active
	I0914 22:47:02.940832   45407 main.go:141] libmachine: (no-preload-344363) Ensuring network mk-no-preload-344363 is active
	I0914 22:47:02.941287   45407 main.go:141] libmachine: (no-preload-344363) Getting domain xml...
	I0914 22:47:02.942103   45407 main.go:141] libmachine: (no-preload-344363) Creating domain...
	I0914 22:47:04.410207   45407 main.go:141] libmachine: (no-preload-344363) Waiting to get IP...
	I0914 22:47:04.411192   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.411669   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.411744   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.411647   47373 retry.go:31] will retry after 198.435142ms: waiting for machine to come up
	I0914 22:47:04.612435   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.612957   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.613025   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.612934   47373 retry.go:31] will retry after 350.950211ms: waiting for machine to come up
	I0914 22:47:04.965570   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:04.966332   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:04.966458   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:04.966377   47373 retry.go:31] will retry after 398.454996ms: waiting for machine to come up
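
	The "will retry after …: waiting for machine to come up" lines above poll for the restarted VM's IP address with growing, jittered delays. A hypothetical Go sketch of that retry shape; getIP is a placeholder, and the backoff constants are assumptions chosen only to resemble the logged intervals:

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // getIP is a placeholder; a real implementation would query the KVM
	    // driver's DHCP leases, as the libmachine DBG lines above do.
	    func getIP() (string, error) {
	    	return "", errors.New("unable to find current IP address")
	    }

	    func main() {
	    	delay := 200 * time.Millisecond
	    	for attempt := 1; attempt <= 6; attempt++ {
	    		ip, err := getIP()
	    		if err == nil {
	    			fmt.Println("machine IP:", ip)
	    			return
	    		}
	    		// Add jitter and roughly double the delay each attempt.
	    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
	    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
	    		time.Sleep(wait)
	    		delay *= 2
	    	}
	    	fmt.Println("gave up waiting for machine IP")
	    }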
	I0914 22:47:04.390295   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetIP
	I0914 22:47:04.393815   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394249   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:47:04.394282   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:47:04.394543   46713 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:04.398850   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:04.411297   46713 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:47:04.411363   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:04.443950   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:04.444023   46713 ssh_runner.go:195] Run: which lz4
	I0914 22:47:04.448422   46713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:47:04.453479   46713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:47:04.453505   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0914 22:47:03.686086   45954 pod_ready.go:92] pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.686112   45954 pod_ready.go:81] duration metric: took 6.523915685s waiting for pod "etcd-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.686125   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692434   45954 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.692454   45954 pod_ready.go:81] duration metric: took 6.320818ms waiting for pod "kube-apiserver-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.692466   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698065   45954 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.698088   45954 pod_ready.go:81] duration metric: took 5.613243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.698100   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703688   45954 pod_ready.go:92] pod "kube-proxy-j2qmv" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.703706   45954 pod_ready.go:81] duration metric: took 5.599421ms waiting for pod "kube-proxy-j2qmv" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.703718   45954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708487   45954 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:03.708505   45954 pod_ready.go:81] duration metric: took 4.779322ms waiting for pod "kube-scheduler-default-k8s-diff-port-799144" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:03.708516   45954 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:05.993620   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
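
	The pod_ready entries above wait for system pods (here metrics-server-57f55c9bc5-hfgp8, still not Ready) to reach the PodReady=True condition. A client-go sketch of that wait, assuming a kubeconfig path, a 2s poll interval, and a 6-minute timeout; minikube's own pod_ready.go logic is more involved than this:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // isReady reports whether the pod's PodReady condition is True.
	    func isReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	    	if err != nil {
	    		panic(err)
	    	}
	    	client, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	deadline := time.Now().Add(6 * time.Minute)
	    	for time.Now().Before(deadline) {
	    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-hfgp8", metav1.GetOptions{})
	    		if err == nil && isReady(pod) {
	    			fmt.Println("pod is Ready")
	    			return
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    	fmt.Println("timed out waiting for pod to be Ready")
	    }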
	I0914 22:47:07.475579   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.475617   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:07.475631   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:07.531335   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:07.531366   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:08.032057   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.039350   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.039384   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:08.531559   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:08.538857   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:47:08.538891   46412 api_server.go:103] status: https://192.168.61.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:47:09.031899   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:47:09.037891   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:47:09.047398   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:47:09.047426   46412 api_server.go:131] duration metric: took 5.880732639s to wait for apiserver health ...
	I0914 22:47:09.047434   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:47:09.047440   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:09.049137   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:05.366070   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.366812   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.366844   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.366740   47373 retry.go:31] will retry after 471.857141ms: waiting for machine to come up
	I0914 22:47:05.840519   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:05.841198   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:05.841229   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:05.841150   47373 retry.go:31] will retry after 632.189193ms: waiting for machine to come up
	I0914 22:47:06.475175   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:06.475769   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:06.475800   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:06.475704   47373 retry.go:31] will retry after 866.407813ms: waiting for machine to come up
	I0914 22:47:07.344343   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:07.344865   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:07.344897   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:07.344815   47373 retry.go:31] will retry after 1.101301607s: waiting for machine to come up
	I0914 22:47:08.448452   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:08.449070   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:08.449111   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:08.449014   47373 retry.go:31] will retry after 995.314765ms: waiting for machine to come up
	I0914 22:47:09.446294   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:09.446708   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:09.446740   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:09.446653   47373 retry.go:31] will retry after 1.180552008s: waiting for machine to come up
	I0914 22:47:05.984485   46713 crio.go:444] Took 1.536109 seconds to copy over tarball
	I0914 22:47:05.984562   46713 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:47:09.247825   46713 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.263230608s)
	I0914 22:47:09.247858   46713 crio.go:451] Took 3.263345 seconds to extract the tarball
	I0914 22:47:09.247871   46713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:47:09.289821   46713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:09.340429   46713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0914 22:47:09.340463   46713 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:09.340544   46713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.340568   46713 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0914 22:47:09.340535   46713 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.340531   46713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.340789   46713 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.340811   46713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.340886   46713 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.340793   46713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.342655   46713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.342658   46713 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.342636   46713 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:09.342635   46713 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.342633   46713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.342793   46713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.561063   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0914 22:47:09.564079   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.564246   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.564957   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.566014   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.571757   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.578469   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.687502   46713 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0914 22:47:09.687548   46713 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0914 22:47:09.687591   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.727036   46713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0914 22:47:09.727085   46713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.727140   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0914 22:47:09.737952   46713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.737905   46713 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0914 22:47:09.737986   46713 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.737990   46713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0914 22:47:09.738002   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738013   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.738023   46713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.738063   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.744728   46713 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0914 22:47:09.744768   46713 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.744813   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753014   46713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0914 22:47:09.753055   46713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.753080   46713 ssh_runner.go:195] Run: which crictl
	I0914 22:47:09.753104   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0914 22:47:09.753056   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0914 22:47:09.753149   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0914 22:47:09.753193   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0914 22:47:09.753213   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0914 22:47:09.758372   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0914 22:47:09.758544   46713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0914 22:47:09.875271   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0914 22:47:09.875299   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0914 22:47:09.875357   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0914 22:47:09.875382   46713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.875404   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0914 22:47:09.876393   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0914 22:47:09.878339   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0914 22:47:09.878491   46713 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0914 22:47:09.881457   46713 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0914 22:47:09.881475   46713 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0914 22:47:09.881521   46713 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0914 22:47:08.496805   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.993044   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:09.050966   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:09.061912   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:09.096783   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:09.111938   46412 system_pods.go:59] 8 kube-system pods found
	I0914 22:47:09.111976   46412 system_pods.go:61] "coredns-5dd5756b68-zrd8r" [5b5f18a0-d6ee-42f2-b31a-4f8555b50388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:09.111988   46412 system_pods.go:61] "etcd-embed-certs-588699" [b32d61b5-8c3f-4980-9f0f-c08630be9c36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:47:09.112001   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [58ac976e-7a8c-4aee-9ee5-b92bd7e897b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:47:09.112015   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [3f9587f5-fe32-446a-a4c9-cb679b177937] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:47:09.112036   46412 system_pods.go:61] "kube-proxy-l8pq9" [4aecae33-dcd9-4ec6-a537-ecbb076c44d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 22:47:09.112052   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [f23ab185-f4c2-4e39-936d-51d51538b0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:47:09.112066   46412 system_pods.go:61] "metrics-server-57f55c9bc5-zvk82" [3c48277c-4604-4a83-82ea-2776cf0d0537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:47:09.112077   46412 system_pods.go:61] "storage-provisioner" [f0acbbe1-c326-4863-ae2e-d2d3e5be07c1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:47:09.112090   46412 system_pods.go:74] duration metric: took 15.280254ms to wait for pod list to return data ...
	I0914 22:47:09.112103   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:09.119686   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:09.119725   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:09.119747   46412 node_conditions.go:105] duration metric: took 7.637688ms to run NodePressure ...
	I0914 22:47:09.119768   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:09.407351   46412 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414338   46412 kubeadm.go:787] kubelet initialised
	I0914 22:47:09.414361   46412 kubeadm.go:788] duration metric: took 6.974234ms waiting for restarted kubelet to initialise ...
	I0914 22:47:09.414369   46412 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:47:09.424482   46412 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:12.171133   46412 pod_ready.go:102] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:10.628919   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:10.629418   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:10.629449   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:10.629366   47373 retry.go:31] will retry after 1.486310454s: waiting for machine to come up
	I0914 22:47:12.117762   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:12.118350   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:12.118381   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:12.118295   47373 retry.go:31] will retry after 2.678402115s: waiting for machine to come up
	I0914 22:47:14.798599   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:14.799127   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:14.799160   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:14.799060   47373 retry.go:31] will retry after 2.724185493s: waiting for machine to come up
	I0914 22:47:10.647242   46713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:12.244764   46713 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.363213143s)
	I0914 22:47:12.244798   46713 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0914 22:47:12.244823   46713 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.013457524s)
	I0914 22:47:12.244888   46713 cache_images.go:92] LoadImages completed in 2.904411161s
	W0914 22:47:12.244978   46713 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0914 22:47:12.245070   46713 ssh_runner.go:195] Run: crio config
	I0914 22:47:12.328636   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:12.328663   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:12.328687   46713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:12.328710   46713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-930717 NodeName:old-k8s-version-930717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 22:47:12.328882   46713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-930717"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-930717
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:12.328984   46713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-930717 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:47:12.329062   46713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0914 22:47:12.339084   46713 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:12.339169   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:12.348354   46713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 22:47:12.369083   46713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:12.388242   46713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0914 22:47:12.407261   46713 ssh_runner.go:195] Run: grep 192.168.72.70	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:12.411055   46713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:12.425034   46713 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717 for IP: 192.168.72.70
	I0914 22:47:12.425070   46713 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:12.425236   46713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:12.425283   46713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:12.425372   46713 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.key
	I0914 22:47:12.425451   46713 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key.382dacf3
	I0914 22:47:12.425512   46713 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key
	I0914 22:47:12.425642   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:12.425671   46713 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:12.425685   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:12.425708   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:12.425732   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:12.425751   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:12.425789   46713 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:12.426339   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:12.456306   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:47:12.486038   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:12.520941   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 22:47:12.552007   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:12.589620   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:12.619358   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:12.650395   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:12.678898   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:12.704668   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:12.730499   46713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:12.755286   46713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:12.773801   46713 ssh_runner.go:195] Run: openssl version
	I0914 22:47:12.781147   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:12.793953   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799864   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.799922   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:12.806881   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:12.817936   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:12.830758   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836538   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.836613   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:12.843368   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:12.855592   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:12.866207   46713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871317   46713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.871368   46713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:12.878438   46713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:12.891012   46713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:12.895887   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:12.902284   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:12.909482   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:12.916524   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:12.924045   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:12.929935   46713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:47:12.937292   46713 kubeadm.go:404] StartCluster: {Name:old-k8s-version-930717 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-930717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:12.937417   46713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:12.937470   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:12.975807   46713 cri.go:89] found id: ""
	I0914 22:47:12.975902   46713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:12.988356   46713 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:12.988379   46713 kubeadm.go:636] restartCluster start
	I0914 22:47:12.988434   46713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:13.000294   46713 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.001492   46713 kubeconfig.go:92] found "old-k8s-version-930717" server: "https://192.168.72.70:8443"
	I0914 22:47:13.008583   46713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:13.023004   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.023065   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.037604   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.037625   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.037671   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.048939   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:13.549653   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:13.549746   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:13.561983   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.049481   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.049588   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.064694   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:14.549101   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:14.549195   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:14.564858   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:15.049112   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.049206   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.063428   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:12.993654   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:14.995358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:13.946979   46412 pod_ready.go:92] pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:13.947004   46412 pod_ready.go:81] duration metric: took 4.522495708s waiting for pod "coredns-5dd5756b68-zrd8r" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:13.947013   46412 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:15.968061   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:18.465595   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:17.526472   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:17.526915   45407 main.go:141] libmachine: (no-preload-344363) DBG | unable to find current IP address of domain no-preload-344363 in network mk-no-preload-344363
	I0914 22:47:17.526946   45407 main.go:141] libmachine: (no-preload-344363) DBG | I0914 22:47:17.526867   47373 retry.go:31] will retry after 3.587907236s: waiting for machine to come up
	I0914 22:47:15.549179   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:15.549273   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:15.561977   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.049593   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.049678   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.063654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:16.549178   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:16.549248   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:16.561922   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.049041   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.049131   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.062442   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.550005   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:17.550066   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:17.561254   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.049855   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.049932   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.062226   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:18.549845   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:18.549941   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:18.561219   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.049739   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.049829   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.061225   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:19.550035   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:19.550112   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:19.561546   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:20.049979   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.050080   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.061478   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:17.489830   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:19.490802   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.490931   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:21.118871   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119369   45407 main.go:141] libmachine: (no-preload-344363) Found IP for machine: 192.168.39.60
	I0914 22:47:21.119391   45407 main.go:141] libmachine: (no-preload-344363) Reserving static IP address...
	I0914 22:47:21.119418   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has current primary IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.119860   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.119888   45407 main.go:141] libmachine: (no-preload-344363) Reserved static IP address: 192.168.39.60
	I0914 22:47:21.119906   45407 main.go:141] libmachine: (no-preload-344363) DBG | skip adding static IP to network mk-no-preload-344363 - found existing host DHCP lease matching {name: "no-preload-344363", mac: "52:54:00:de:ec:3d", ip: "192.168.39.60"}
	I0914 22:47:21.119931   45407 main.go:141] libmachine: (no-preload-344363) DBG | Getting to WaitForSSH function...
	I0914 22:47:21.119949   45407 main.go:141] libmachine: (no-preload-344363) Waiting for SSH to be available...
	I0914 22:47:21.121965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122282   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.122312   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.122392   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH client type: external
	I0914 22:47:21.122429   45407 main.go:141] libmachine: (no-preload-344363) DBG | Using SSH private key: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa (-rw-------)
	I0914 22:47:21.122482   45407 main.go:141] libmachine: (no-preload-344363) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 22:47:21.122510   45407 main.go:141] libmachine: (no-preload-344363) DBG | About to run SSH command:
	I0914 22:47:21.122521   45407 main.go:141] libmachine: (no-preload-344363) DBG | exit 0
	I0914 22:47:21.206981   45407 main.go:141] libmachine: (no-preload-344363) DBG | SSH cmd err, output: <nil>: 
	I0914 22:47:21.207366   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetConfigRaw
	I0914 22:47:21.208066   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.210323   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210607   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.210639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.210795   45407 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/config.json ...
	I0914 22:47:21.211016   45407 machine.go:88] provisioning docker machine ...
	I0914 22:47:21.211036   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:21.211258   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211431   45407 buildroot.go:166] provisioning hostname "no-preload-344363"
	I0914 22:47:21.211455   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.211629   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.213574   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.213887   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.213921   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.214015   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.214181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214338   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.214461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.214648   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.215041   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.215056   45407 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-344363 && echo "no-preload-344363" | sudo tee /etc/hostname
	I0914 22:47:21.347323   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-344363
	
	I0914 22:47:21.347358   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.350445   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.350846   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.350882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.351144   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.351393   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351599   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.351766   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.351944   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.352264   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.352291   45407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-344363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-344363/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-344363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:47:21.471619   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:47:21.471648   45407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17243-6287/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-6287/.minikube}
	I0914 22:47:21.471671   45407 buildroot.go:174] setting up certificates
	I0914 22:47:21.471683   45407 provision.go:83] configureAuth start
	I0914 22:47:21.471696   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetMachineName
	I0914 22:47:21.472019   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:21.474639   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475113   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.475141   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.475293   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.477627   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.477976   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.478009   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.478148   45407 provision.go:138] copyHostCerts
	I0914 22:47:21.478189   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem, removing ...
	I0914 22:47:21.478198   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem
	I0914 22:47:21.478249   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/ca.pem (1078 bytes)
	I0914 22:47:21.478336   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem, removing ...
	I0914 22:47:21.478344   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem
	I0914 22:47:21.478362   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/cert.pem (1123 bytes)
	I0914 22:47:21.478416   45407 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem, removing ...
	I0914 22:47:21.478423   45407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem
	I0914 22:47:21.478439   45407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-6287/.minikube/key.pem (1679 bytes)
	I0914 22:47:21.478482   45407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem org=jenkins.no-preload-344363 san=[192.168.39.60 192.168.39.60 localhost 127.0.0.1 minikube no-preload-344363]
	I0914 22:47:21.546956   45407 provision.go:172] copyRemoteCerts
	I0914 22:47:21.547006   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:47:21.547029   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.549773   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550217   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.550257   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.550468   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.550683   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.550850   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.551050   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:21.635939   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:47:21.656944   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0914 22:47:21.679064   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 22:47:21.701127   45407 provision.go:86] duration metric: configureAuth took 229.434247ms
	I0914 22:47:21.701147   45407 buildroot.go:189] setting minikube options for container-runtime
	I0914 22:47:21.701319   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:47:21.701381   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:21.704100   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704475   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:21.704512   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:21.704672   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:21.704865   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705046   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:21.705218   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:21.705382   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:21.705828   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:21.705849   45407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:47:22.037291   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:47:22.037337   45407 machine.go:91] provisioned docker machine in 826.295956ms
	I0914 22:47:22.037350   45407 start.go:300] post-start starting for "no-preload-344363" (driver="kvm2")
	I0914 22:47:22.037363   45407 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:47:22.037396   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.037704   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:47:22.037729   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.040372   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040729   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.040757   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.040896   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.041082   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.041266   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.041373   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.129612   45407 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:47:22.133522   45407 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 22:47:22.133550   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/addons for local assets ...
	I0914 22:47:22.133625   45407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-6287/.minikube/files for local assets ...
	I0914 22:47:22.133715   45407 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem -> 134852.pem in /etc/ssl/certs
	I0914 22:47:22.133844   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:47:22.142411   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:22.165470   45407 start.go:303] post-start completed in 128.106418ms
	I0914 22:47:22.165496   45407 fix.go:56] fixHost completed within 19.252903923s
	I0914 22:47:22.165524   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.168403   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168696   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.168731   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.168894   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.169095   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169248   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.169384   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.169571   45407 main.go:141] libmachine: Using SSH client type: native
	I0914 22:47:22.169891   45407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0914 22:47:22.169904   45407 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 22:47:22.284038   45407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694731642.258576336
	
	I0914 22:47:22.284062   45407 fix.go:206] guest clock: 1694731642.258576336
	I0914 22:47:22.284071   45407 fix.go:219] Guest: 2023-09-14 22:47:22.258576336 +0000 UTC Remote: 2023-09-14 22:47:22.16550191 +0000 UTC m=+357.203571663 (delta=93.074426ms)
	I0914 22:47:22.284107   45407 fix.go:190] guest clock delta is within tolerance: 93.074426ms
	I0914 22:47:22.284117   45407 start.go:83] releasing machines lock for "no-preload-344363", held for 19.371563772s
	I0914 22:47:22.284146   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.284388   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:22.286809   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287091   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.287133   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.287288   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287782   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.287978   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:47:22.288050   45407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:47:22.288085   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.288176   45407 ssh_runner.go:195] Run: cat /version.json
	I0914 22:47:22.288197   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:47:22.290608   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.290936   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.290965   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291067   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291157   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291345   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291516   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.291529   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:22.291554   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:22.291649   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.291706   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:47:22.291837   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:47:22.291975   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:47:22.292158   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:47:22.417570   45407 ssh_runner.go:195] Run: systemctl --version
	I0914 22:47:22.423145   45407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:47:22.563752   45407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 22:47:22.569625   45407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 22:47:22.569718   45407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:47:22.585504   45407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 22:47:22.585527   45407 start.go:469] detecting cgroup driver to use...
	I0914 22:47:22.585610   45407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:47:22.599600   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:47:22.612039   45407 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:47:22.612080   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:47:22.624817   45407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:47:22.637141   45407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:47:22.744181   45407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:47:22.864420   45407 docker.go:212] disabling docker service ...
	I0914 22:47:22.864490   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:47:22.877360   45407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:47:22.888786   45407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:47:23.000914   45407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:47:23.137575   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:47:23.150682   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:47:23.167898   45407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:47:23.167966   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.176916   45407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:47:23.176991   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.185751   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.195260   45407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:47:23.204852   45407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:47:23.214303   45407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:47:23.222654   45407 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 22:47:23.222717   45407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 22:47:23.235654   45407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:47:23.244081   45407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:47:23.357943   45407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:47:23.521315   45407 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:47:23.521410   45407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:47:23.526834   45407 start.go:537] Will wait 60s for crictl version
	I0914 22:47:23.526889   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:23.530250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:47:23.562270   45407 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0914 22:47:23.562358   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.606666   45407 ssh_runner.go:195] Run: crio --version
	I0914 22:47:23.658460   45407 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0914 22:47:20.467600   46412 pod_ready.go:102] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:20.964310   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.964331   46412 pod_ready.go:81] duration metric: took 7.017312906s waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.964349   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968539   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.968555   46412 pod_ready.go:81] duration metric: took 4.200242ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.968563   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973180   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.973194   46412 pod_ready.go:81] duration metric: took 4.625123ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.973206   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977403   46412 pod_ready.go:92] pod "kube-proxy-l8pq9" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:20.977418   46412 pod_ready.go:81] duration metric: took 4.206831ms waiting for pod "kube-proxy-l8pq9" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:20.977425   46412 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375236   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:47:22.375259   46412 pod_ready.go:81] duration metric: took 1.397826525s waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:22.375271   46412 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	I0914 22:47:23.659885   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetIP
	I0914 22:47:23.662745   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663195   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:47:23.663228   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:47:23.663452   45407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 22:47:23.667637   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:23.678881   45407 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:47:23.678929   45407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:47:23.708267   45407 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0914 22:47:23.708309   45407 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:47:23.708390   45407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.708421   45407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.708424   45407 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0914 22:47:23.708437   45407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.708425   45407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.708537   45407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.708403   45407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.708393   45407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.709903   45407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.709895   45407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.709887   45407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:23.709899   45407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.710189   45407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.710260   45407 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0914 22:47:23.710346   45407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:23.917134   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:23.929080   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:23.929396   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0914 22:47:23.935684   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:23.936236   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:23.937239   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:23.937622   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.006429   45407 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0914 22:47:24.006479   45407 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.006524   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.102547   45407 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0914 22:47:24.102597   45407 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.102641   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201012   45407 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0914 22:47:24.201050   45407 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.201100   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201106   45407 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0914 22:47:24.201138   45407 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.201156   45407 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0914 22:47:24.201203   45407 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.201227   45407 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0914 22:47:24.201282   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0914 22:47:24.201294   45407 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.201329   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201236   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201180   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:24.201250   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0914 22:47:24.206295   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 22:47:24.263389   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0914 22:47:24.263451   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0914 22:47:24.263501   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0914 22:47:24.263513   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:24.263534   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0914 22:47:24.263573   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0914 22:47:24.263665   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.273844   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0914 22:47:24.273932   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:24.338823   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0914 22:47:24.338944   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:24.344560   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0914 22:47:24.344580   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0914 22:47:24.344594   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344635   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0914 22:47:24.344659   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:24.344678   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0914 22:47:24.344723   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0914 22:47:24.344745   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0914 22:47:24.344816   45407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:24.346975   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0914 22:47:24.953835   45407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:20.549479   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:20.549585   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:20.563121   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.049732   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.049807   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.061447   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:21.549012   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:21.549073   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:21.561653   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.049517   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.049582   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.062280   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:22.549943   46713 api_server.go:166] Checking apiserver status ...
	I0914 22:47:22.550017   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:22.562654   46713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:23.024019   46713 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:23.024043   46713 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:23.024054   46713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:23.024101   46713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:23.060059   46713 cri.go:89] found id: ""
	I0914 22:47:23.060116   46713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:23.078480   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:23.087665   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:23.087714   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096513   46713 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:23.096535   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:23.205072   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.081881   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.285041   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.364758   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:24.468127   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:24.468201   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:24.483354   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.007133   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:25.507231   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:23.992945   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.492600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:24.475872   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.978889   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:26.317110   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.97244294s)
	I0914 22:47:26.317145   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0914 22:47:26.317167   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317174   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.972489589s)
	I0914 22:47:26.317202   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0914 22:47:26.317215   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0914 22:47:26.317248   45407 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.363386448s)
	I0914 22:47:26.317281   45407 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 22:47:26.317319   45407 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.317366   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:47:26.317213   45407 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (1.972376756s)
	I0914 22:47:26.317426   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0914 22:47:28.397989   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (2.080744487s)
	I0914 22:47:28.398021   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0914 22:47:28.398031   45407 ssh_runner.go:235] Completed: which crictl: (2.080647539s)
	I0914 22:47:28.398048   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398093   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0914 22:47:28.398095   45407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:47:26.006554   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:26.032232   46713 api_server.go:72] duration metric: took 1.564104415s to wait for apiserver process to appear ...
	I0914 22:47:26.032255   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:26.032270   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:28.992292   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.490442   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.033000   46713 api_server.go:269] stopped: https://192.168.72.70:8443/healthz: Get "https://192.168.72.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 22:47:31.033044   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:31.568908   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:47:31.568937   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:47:32.069915   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.080424   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.080456   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:32.570110   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:32.580879   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0914 22:47:32.580918   46713 api_server.go:103] status: https://192.168.72.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0914 22:47:33.069247   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:47:33.077664   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:47:33.086933   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:47:33.086960   46713 api_server.go:131] duration metric: took 7.054699415s to wait for apiserver health ...
	I0914 22:47:33.086973   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:47:33.086981   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:33.088794   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:29.476304   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:31.975459   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:30.974281   45407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.57612291s)
	I0914 22:47:30.974347   45407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 22:47:30.974381   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.576263058s)
	I0914 22:47:30.974403   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0914 22:47:30.974427   45407 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:30.974455   45407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:30.974470   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0914 22:47:33.737309   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.762815322s)
	I0914 22:47:33.737355   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0914 22:47:33.737379   45407 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.737322   45407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.762844826s)
	I0914 22:47:33.737464   45407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 22:47:33.737436   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0914 22:47:33.090357   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:47:33.103371   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:47:33.123072   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:47:33.133238   46713 system_pods.go:59] 7 kube-system pods found
	I0914 22:47:33.133268   46713 system_pods.go:61] "coredns-5644d7b6d9-8sbjk" [638464d2-96db-460d-bf82-0ee79df816da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:47:33.133278   46713 system_pods.go:61] "etcd-old-k8s-version-930717" [4b38f48a-fc4a-43d5-a2b4-414aff712c1b] Running
	I0914 22:47:33.133286   46713 system_pods.go:61] "kube-apiserver-old-k8s-version-930717" [523a3adc-8c68-4980-8a53-133476ce2488] Running
	I0914 22:47:33.133294   46713 system_pods.go:61] "kube-controller-manager-old-k8s-version-930717" [36fd7e01-4a5d-446f-8370-f7a7e886571c] Running
	I0914 22:47:33.133306   46713 system_pods.go:61] "kube-proxy-l4qz4" [c61d0471-0a9e-4662-b723-39944c8b3c31] Running
	I0914 22:47:33.133314   46713 system_pods.go:61] "kube-scheduler-old-k8s-version-930717" [f6d45807-c7f2-4545-b732-45dbd945c660] Running
	I0914 22:47:33.133323   46713 system_pods.go:61] "storage-provisioner" [2956bea1-80f8-4f61-a635-4332d4e3042e] Running
	I0914 22:47:33.133331   46713 system_pods.go:74] duration metric: took 10.233824ms to wait for pod list to return data ...
	I0914 22:47:33.133343   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:47:33.137733   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:47:33.137765   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:47:33.137776   46713 node_conditions.go:105] duration metric: took 4.42667ms to run NodePressure ...
	I0914 22:47:33.137795   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:33.590921   46713 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:47:33.597720   46713 retry.go:31] will retry after 159.399424ms: kubelet not initialised
	I0914 22:47:33.767747   46713 retry.go:31] will retry after 191.717885ms: kubelet not initialised
	I0914 22:47:33.967120   46713 retry.go:31] will retry after 382.121852ms: kubelet not initialised
	I0914 22:47:34.354106   46713 retry.go:31] will retry after 1.055800568s: kubelet not initialised
	I0914 22:47:35.413704   46713 retry.go:31] will retry after 1.341728619s: kubelet not initialised
	I0914 22:47:33.993188   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.491280   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:34.475254   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.977175   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:36.760804   46713 retry.go:31] will retry after 2.668611083s: kubelet not initialised
	I0914 22:47:39.434688   46713 retry.go:31] will retry after 2.1019007s: kubelet not initialised
	I0914 22:47:38.994051   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.490913   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:38.998980   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:41.474686   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:40.530763   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.793268381s)
	I0914 22:47:40.530793   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0914 22:47:40.530820   45407 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:40.530881   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0914 22:47:41.888277   45407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.357355595s)
	I0914 22:47:41.888305   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0914 22:47:41.888338   45407 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:41.888405   45407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 22:47:42.537191   45407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17243-6287/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 22:47:42.537244   45407 cache_images.go:123] Successfully loaded all cached images
	I0914 22:47:42.537251   45407 cache_images.go:92] LoadImages completed in 18.828927203s
	I0914 22:47:42.537344   45407 ssh_runner.go:195] Run: crio config
	I0914 22:47:42.594035   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:47:42.594056   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:47:42.594075   45407 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:47:42.594098   45407 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-344363 NodeName:no-preload-344363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:47:42.594272   45407 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-344363"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:47:42.594383   45407 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-344363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:47:42.594449   45407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:47:42.604172   45407 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:47:42.604243   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:47:42.612570   45407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0914 22:47:42.628203   45407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:47:42.643625   45407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0914 22:47:42.658843   45407 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0914 22:47:42.661922   45407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:47:42.672252   45407 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363 for IP: 192.168.39.60
	I0914 22:47:42.672279   45407 certs.go:190] acquiring lock for shared ca certs: {Name:mkbcd012d4386a08306d0e0209ddbb4c566b10f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:47:42.672420   45407 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key
	I0914 22:47:42.672462   45407 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key
	I0914 22:47:42.672536   45407 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.key
	I0914 22:47:42.672630   45407 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key.a014e791
	I0914 22:47:42.672693   45407 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key
	I0914 22:47:42.672828   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem (1338 bytes)
	W0914 22:47:42.672867   45407 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485_empty.pem, impossibly tiny 0 bytes
	I0914 22:47:42.672879   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:47:42.672915   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:47:42.672948   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:47:42.672982   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/certs/home/jenkins/minikube-integration/17243-6287/.minikube/certs/key.pem (1679 bytes)
	I0914 22:47:42.673044   45407 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem (1708 bytes)
	I0914 22:47:42.673593   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:47:42.695080   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:47:42.716844   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:47:42.746475   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0914 22:47:42.769289   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:47:42.790650   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 22:47:42.811665   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:47:42.833241   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:47:42.853851   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/certs/13485.pem --> /usr/share/ca-certificates/13485.pem (1338 bytes)
	I0914 22:47:42.875270   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/ssl/certs/134852.pem --> /usr/share/ca-certificates/134852.pem (1708 bytes)
	I0914 22:47:42.896913   45407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-6287/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:47:42.917370   45407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:47:42.934549   45407 ssh_runner.go:195] Run: openssl version
	I0914 22:47:42.939762   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13485.pem && ln -fs /usr/share/ca-certificates/13485.pem /etc/ssl/certs/13485.pem"
	I0914 22:47:42.949829   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954155   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 21:46 /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.954204   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13485.pem
	I0914 22:47:42.959317   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13485.pem /etc/ssl/certs/51391683.0"
	I0914 22:47:42.968463   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134852.pem && ln -fs /usr/share/ca-certificates/134852.pem /etc/ssl/certs/134852.pem"
	I0914 22:47:42.979023   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983436   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 21:46 /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.983502   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134852.pem
	I0914 22:47:42.988655   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134852.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:47:42.998288   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:47:43.007767   45407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011865   45407 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 21:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.011940   45407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:47:43.016837   45407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:47:43.026372   45407 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:47:43.030622   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 22:47:43.036026   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 22:47:43.041394   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 22:47:43.046608   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 22:47:43.051675   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 22:47:43.056621   45407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 22:47:43.061552   45407 kubeadm.go:404] StartCluster: {Name:no-preload-344363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-344363 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:47:43.061645   45407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:47:43.061700   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:43.090894   45407 cri.go:89] found id: ""
	I0914 22:47:43.090957   45407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:47:43.100715   45407 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 22:47:43.100732   45407 kubeadm.go:636] restartCluster start
	I0914 22:47:43.100782   45407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 22:47:43.109233   45407 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.110217   45407 kubeconfig.go:92] found "no-preload-344363" server: "https://192.168.39.60:8443"
	I0914 22:47:43.112442   45407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 22:47:43.120580   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.120619   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.131224   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.131238   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.131292   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.140990   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:43.641661   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:43.641753   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:43.653379   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.142002   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.142077   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.154194   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:44.641806   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:44.641931   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:44.653795   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:41.541334   46713 retry.go:31] will retry after 2.553142131s: kubelet not initialised
	I0914 22:47:44.100647   46713 retry.go:31] will retry after 6.538244211s: kubelet not initialised
	I0914 22:47:43.995757   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.490438   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:43.974300   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:46.474137   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:45.141728   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.141816   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.153503   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:45.641693   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:45.641775   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:45.653204   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.141748   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.141838   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.153035   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:46.641294   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:46.641386   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:46.653144   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.141813   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.141915   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.152408   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:47.641793   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:47.641872   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:47.653228   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.141212   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.141304   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.152568   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.641805   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:48.641881   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:48.652184   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.141839   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.141909   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.152921   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:49.642082   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:49.642160   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:49.656837   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:48.991209   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:51.492672   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:48.973567   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.974964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:52.975525   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:50.141324   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.141399   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.153003   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:50.642032   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:50.642113   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:50.653830   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.141403   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.141486   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.152324   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:51.641932   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:51.642027   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:51.653279   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.141928   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.141998   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.152653   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:52.641151   45407 api_server.go:166] Checking apiserver status ...
	I0914 22:47:52.641239   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 22:47:52.652312   45407 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 22:47:53.121389   45407 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 22:47:53.121422   45407 kubeadm.go:1128] stopping kube-system containers ...
	I0914 22:47:53.121436   45407 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 22:47:53.121511   45407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:47:53.150615   45407 cri.go:89] found id: ""
	I0914 22:47:53.150681   45407 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 22:47:53.164511   45407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:47:53.173713   45407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:47:53.173778   45407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183776   45407 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 22:47:53.183797   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:53.310974   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.230246   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.409237   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.474183   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:47:54.572433   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:47:54.572581   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:54.584938   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:50.644922   46713 retry.go:31] will retry after 11.248631638s: kubelet not initialised
	I0914 22:47:53.990630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.990661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.475037   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:57.475941   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:55.098638   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:55.599218   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.099188   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.598826   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:47:56.621701   45407 api_server.go:72] duration metric: took 2.049267478s to wait for apiserver process to appear ...
	I0914 22:47:56.621729   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:47:56.621749   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622263   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:56.622301   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:47:56.622682   45407 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0914 22:47:57.123404   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.433050   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 22:48:00.433082   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 22:48:00.433096   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.467030   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.467073   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:00.623319   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:00.633882   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:00.633912   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.123559   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.128661   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.128691   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:01.623201   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:01.629775   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 22:48:01.629804   45407 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 22:48:02.123439   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:48:02.131052   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:48:02.141185   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:48:02.141213   45407 api_server.go:131] duration metric: took 5.519473898s to wait for apiserver health ...
	I0914 22:48:02.141222   45407 cni.go:84] Creating CNI manager for ""
	I0914 22:48:02.141228   45407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:48:02.143254   45407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:47:57.992038   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:47:59.992600   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:02.144756   45407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:48:02.158230   45407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0914 22:48:02.182382   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:48:02.204733   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:48:02.204786   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:48:02.204801   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 22:48:02.204817   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 22:48:02.204834   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 22:48:02.204847   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:48:02.204859   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 22:48:02.204876   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:48:02.204887   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:48:02.204900   45407 system_pods.go:74] duration metric: took 22.491699ms to wait for pod list to return data ...
	I0914 22:48:02.204913   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:48:02.208661   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:48:02.208692   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:48:02.208706   45407 node_conditions.go:105] duration metric: took 3.7844ms to run NodePressure ...
	I0914 22:48:02.208731   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 22:48:02.454257   45407 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458848   45407 kubeadm.go:787] kubelet initialised
	I0914 22:48:02.458868   45407 kubeadm.go:788] duration metric: took 4.585034ms waiting for restarted kubelet to initialise ...
	I0914 22:48:02.458874   45407 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:02.464634   45407 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.471350   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471371   45407 pod_ready.go:81] duration metric: took 6.714087ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.471379   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.471387   45407 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.476977   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.476998   45407 pod_ready.go:81] duration metric: took 5.604627ms waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.477009   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "etcd-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.477019   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.483218   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483236   45407 pod_ready.go:81] duration metric: took 6.211697ms waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.483244   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-apiserver-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.483256   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.589184   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589217   45407 pod_ready.go:81] duration metric: took 105.950074ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.589227   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.589236   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:02.987051   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987081   45407 pod_ready.go:81] duration metric: took 397.836385ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:02.987094   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-proxy-zzkbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:02.987103   45407 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.392835   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392865   45407 pod_ready.go:81] duration metric: took 405.754351ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.392876   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "kube-scheduler-no-preload-344363" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.392886   45407 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:03.786615   45407 pod_ready.go:97] node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786641   45407 pod_ready.go:81] duration metric: took 393.746366ms waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:48:03.786652   45407 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-344363" hosting pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:03.786660   45407 pod_ready.go:38] duration metric: took 1.327778716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
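The pod_ready wait summarized above repeatedly fetches each system-critical pod and checks its Ready condition (skipping pods, with the "skipping!" message, while the hosting node itself is not Ready). A rough client-go sketch of the Ready-condition check follows; the import paths are standard client-go packages, the kubeconfig path and pod name are copied from this log, and the helper names and polling interval are assumptions for illustration, not minikube's code.

    // podready.go - illustrative sketch of a "wait for pod Ready" loop, not minikube's code.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			return nil
    		}
    		time.Sleep(2 * time.Second) // assumed polling interval
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	// kubeconfig path copied from the log; pod name is one of the pods waited on above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17243-6287/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-rntdg", 4*time.Minute))
    }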
	I0914 22:48:03.786676   45407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:48:03.798081   45407 ops.go:34] apiserver oom_adj: -16
	I0914 22:48:03.798101   45407 kubeadm.go:640] restartCluster took 20.697363165s
	I0914 22:48:03.798107   45407 kubeadm.go:406] StartCluster complete in 20.736562339s
	I0914 22:48:03.798121   45407 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.798193   45407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:48:03.799954   45407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:48:03.800200   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:48:03.800299   45407 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:48:03.800368   45407 addons.go:69] Setting storage-provisioner=true in profile "no-preload-344363"
	I0914 22:48:03.800449   45407 addons.go:231] Setting addon storage-provisioner=true in "no-preload-344363"
	W0914 22:48:03.800462   45407 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:48:03.800511   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800394   45407 addons.go:69] Setting metrics-server=true in profile "no-preload-344363"
	I0914 22:48:03.800543   45407 addons.go:231] Setting addon metrics-server=true in "no-preload-344363"
	W0914 22:48:03.800558   45407 addons.go:240] addon metrics-server should already be in state true
	I0914 22:48:03.800590   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.800388   45407 addons.go:69] Setting default-storageclass=true in profile "no-preload-344363"
	I0914 22:48:03.800633   45407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-344363"
	I0914 22:48:03.800411   45407 config.go:182] Loaded profile config "no-preload-344363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:48:03.800906   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800909   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.800944   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.801011   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.801054   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.800968   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.804911   45407 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-344363" context rescaled to 1 replicas
	I0914 22:48:03.804946   45407 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:48:03.807503   45407 out.go:177] * Verifying Kubernetes components...
	I0914 22:47:59.973913   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:01.974625   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:03.808768   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:48:03.816774   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0914 22:48:03.816773   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0914 22:48:03.817265   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817518   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.817791   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.817821   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818011   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.818032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.818223   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818407   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.818431   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.818976   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.819027   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.829592   45407 addons.go:231] Setting addon default-storageclass=true in "no-preload-344363"
	W0914 22:48:03.829614   45407 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:48:03.829641   45407 host.go:66] Checking if "no-preload-344363" exists ...
	I0914 22:48:03.830013   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.830047   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.835514   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0914 22:48:03.835935   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.836447   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.836473   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.836841   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.837011   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.838909   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.843677   45407 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:48:03.845231   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:48:03.845246   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:48:03.845261   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.844291   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0914 22:48:03.845685   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.846224   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.846242   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.846572   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.847073   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.847103   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.847332   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0914 22:48:03.848400   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.848666   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849160   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.849182   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.849263   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.849283   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.849314   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.849461   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.849570   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.849635   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.849682   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.850555   45407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:48:03.850585   45407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:48:03.863035   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0914 22:48:03.863559   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864010   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864032   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.864204   45407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0914 22:48:03.864478   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.864526   45407 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:48:03.864752   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.864936   45407 main.go:141] libmachine: Using API Version  1
	I0914 22:48:03.864955   45407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:48:03.865261   45407 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:48:03.865489   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetState
	I0914 22:48:03.866474   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.868300   45407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:48:03.867504   45407 main.go:141] libmachine: (no-preload-344363) Calling .DriverName
	I0914 22:48:03.869841   45407 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:03.869855   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:48:03.869874   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.870067   45407 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:03.870078   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:48:03.870091   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHHostname
	I0914 22:48:03.873462   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.873859   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.873882   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874026   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874114   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.874181   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.874287   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.874397   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.874903   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHPort
	I0914 22:48:03.874949   45407 main.go:141] libmachine: (no-preload-344363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ec:3d", ip: ""} in network mk-no-preload-344363: {Iface:virbr3 ExpiryTime:2023-09-14 23:47:15 +0000 UTC Type:0 Mac:52:54:00:de:ec:3d Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:no-preload-344363 Clientid:01:52:54:00:de:ec:3d}
	I0914 22:48:03.874980   45407 main.go:141] libmachine: (no-preload-344363) DBG | domain no-preload-344363 has defined IP address 192.168.39.60 and MAC address 52:54:00:de:ec:3d in network mk-no-preload-344363
	I0914 22:48:03.875135   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHKeyPath
	I0914 22:48:03.875301   45407 main.go:141] libmachine: (no-preload-344363) Calling .GetSSHUsername
	I0914 22:48:03.875486   45407 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/no-preload-344363/id_rsa Username:docker}
	I0914 22:48:03.956934   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:48:03.956956   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:48:03.973872   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:48:03.973896   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:48:04.002028   45407 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.002051   45407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:48:04.018279   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:48:04.037990   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:48:04.047125   45407 node_ready.go:35] waiting up to 6m0s for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:04.047292   45407 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 22:48:04.086299   45407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:48:04.991926   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.991952   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992225   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992292   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992324   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992342   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992364   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992614   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992634   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:04.992649   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:04.992657   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:04.992665   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:04.992914   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:04.992933   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:01.898769   46713 retry.go:31] will retry after 9.475485234s: kubelet not initialised
	I0914 22:48:05.528027   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.490009157s)
	I0914 22:48:05.528078   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528087   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528435   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528457   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528470   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.528436   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.528481   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.528802   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.528824   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.528829   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.600274   45407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.51392997s)
	I0914 22:48:05.600338   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600351   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.600645   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.600670   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.600682   45407 main.go:141] libmachine: Making call to close driver server
	I0914 22:48:05.600695   45407 main.go:141] libmachine: (no-preload-344363) Calling .Close
	I0914 22:48:05.602502   45407 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:48:05.602513   45407 main.go:141] libmachine: (no-preload-344363) DBG | Closing plugin on server side
	I0914 22:48:05.602524   45407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:48:05.602546   45407 addons.go:467] Verifying addon metrics-server=true in "no-preload-344363"
	I0914 22:48:05.604330   45407 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 22:48:02.491577   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.995014   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:04.474529   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:06.474964   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:05.605648   45407 addons.go:502] enable addons completed in 1.805353931s: enabled=[default-storageclass storage-provisioner metrics-server]
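As the ssh_runner lines above show, enabling these addons amounts to copying manifests under /etc/kubernetes/addons/ onto the node and applying them with the node's own kubectl binary and kubeconfig. A small Go sketch of that apply step is below; the binary, kubeconfig and manifest paths are copied from the log, while the program structure and function name are illustrative assumptions rather than minikube's code (the real run also wraps the command in sudo over SSH).

    // applyaddons.go - illustrative sketch of the "kubectl apply -f <addon manifests>" step
    // seen in the log above; not minikube's code.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	// Point kubectl at the cluster's admin kubeconfig, as the logged
    	// `sudo KUBECONFIG=... kubectl apply ...` command does.
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	err := applyManifests(
    		"/var/lib/minikube/binaries/v1.28.1/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "apply failed:", err)
    	}
    }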
	I0914 22:48:06.198114   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:08.199023   45407 node_ready.go:58] node "no-preload-344363" has status "Ready":"False"
	I0914 22:48:07.490770   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:09.991693   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:08.974469   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:11.474711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:10.698198   45407 node_ready.go:49] node "no-preload-344363" has status "Ready":"True"
	I0914 22:48:10.698218   45407 node_ready.go:38] duration metric: took 6.651066752s waiting for node "no-preload-344363" to be "Ready" ...
	I0914 22:48:10.698227   45407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:10.704694   45407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710103   45407 pod_ready.go:92] pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:10.710119   45407 pod_ready.go:81] duration metric: took 5.400404ms waiting for pod "coredns-5dd5756b68-rntdg" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:10.710128   45407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.747445   45407 pod_ready.go:102] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.229927   45407 pod_ready.go:92] pod "etcd-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:13.229953   45407 pod_ready.go:81] duration metric: took 2.519818297s waiting for pod "etcd-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:13.229966   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747126   45407 pod_ready.go:92] pod "kube-apiserver-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.747147   45407 pod_ready.go:81] duration metric: took 1.51717338s waiting for pod "kube-apiserver-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.747157   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752397   45407 pod_ready.go:92] pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:14.752413   45407 pod_ready.go:81] duration metric: took 5.250049ms waiting for pod "kube-controller-manager-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.752420   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.380752   46713 kubeadm.go:787] kubelet initialised
	I0914 22:48:11.380783   46713 kubeadm.go:788] duration metric: took 37.789831498s waiting for restarted kubelet to initialise ...
	I0914 22:48:11.380793   46713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:48:11.386189   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392948   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.392970   46713 pod_ready.go:81] duration metric: took 6.75113ms waiting for pod "coredns-5644d7b6d9-8sbjk" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.392981   46713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398606   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.398627   46713 pod_ready.go:81] duration metric: took 5.638835ms waiting for pod "coredns-5644d7b6d9-gpb4d" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.398639   46713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404145   46713 pod_ready.go:92] pod "etcd-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.404174   46713 pod_ready.go:81] duration metric: took 5.527173ms waiting for pod "etcd-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.404187   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409428   46713 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.409448   46713 pod_ready.go:81] duration metric: took 5.252278ms waiting for pod "kube-apiserver-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.409461   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779225   46713 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:11.779252   46713 pod_ready.go:81] duration metric: took 369.782336ms waiting for pod "kube-controller-manager-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:11.779267   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179256   46713 pod_ready.go:92] pod "kube-proxy-l4qz4" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.179277   46713 pod_ready.go:81] duration metric: took 400.003039ms waiting for pod "kube-proxy-l4qz4" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.179286   46713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578889   46713 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:12.578921   46713 pod_ready.go:81] duration metric: took 399.627203ms waiting for pod "kube-scheduler-old-k8s-version-930717" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:12.578935   46713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:12.491274   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:14.991146   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.991799   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:13.974725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.473917   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.474722   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:15.099588   45407 pod_ready.go:92] pod "kube-proxy-zzkbp" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.099612   45407 pod_ready.go:81] duration metric: took 347.18498ms waiting for pod "kube-proxy-zzkbp" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.099623   45407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498642   45407 pod_ready.go:92] pod "kube-scheduler-no-preload-344363" in "kube-system" namespace has status "Ready":"True"
	I0914 22:48:15.498664   45407 pod_ready.go:81] duration metric: took 399.034277ms waiting for pod "kube-scheduler-no-preload-344363" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:15.498678   45407 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	I0914 22:48:17.806138   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:16.887157   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:19.390361   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:18.991911   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.993133   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.474578   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:20.305450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:22.305521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:24.306131   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:21.885143   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.886722   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:23.490126   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.991185   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:25.974547   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.473850   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.805651   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.306125   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:26.384992   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:28.385266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.385877   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:27.991827   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:29.991995   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:30.475603   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.974568   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:31.806483   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.306121   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.886341   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.385506   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:32.488948   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:34.490950   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.989621   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:35.474815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.973407   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:36.806806   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.806988   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:37.886043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.386865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:38.991151   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:41.491384   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:39.974109   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.473010   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:40.808362   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.305126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:42.886094   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.386710   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:43.991121   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.992500   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:44.475120   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:46.973837   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:45.305212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.305740   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.806334   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:47.886380   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.887578   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:48.490416   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:50.990196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:49.474209   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.474657   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.808853   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.305742   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:51.888488   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.385591   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:52.990333   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:54.991549   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:53.974301   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:55.976250   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.474372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.807759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.304597   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:56.885164   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:58.885809   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:57.491267   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:48:59.492043   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.991231   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:00.974064   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:02.975136   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.808275   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:01.385492   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.385865   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:05.386266   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:03.992513   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.490253   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:04.975537   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.473413   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:06.306066   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.805711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:07.886495   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.386100   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:08.995545   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.490960   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:09.476367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:11.974480   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:10.807870   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.306759   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:12.386166   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:14.886545   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.990090   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.489864   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:13.975102   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:16.474761   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.475314   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:15.809041   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.305700   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:17.385490   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:19.386201   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:18.490727   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.493813   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.973383   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.973978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:20.306906   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.805781   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.806417   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:21.387171   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:23.394663   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:22.989981   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.998602   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:24.975048   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.473804   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.306160   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.805993   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:25.886256   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:28.385307   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:30.386473   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:27.490860   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.991665   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.992373   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:29.475815   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:31.973092   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.305648   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.806797   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:32.886577   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.386203   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:34.490086   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:36.490465   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:33.973662   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:35.974041   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.473275   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.306848   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.806295   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:37.388154   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:39.886447   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:38.490850   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.989734   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:40.473543   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.473711   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:41.807197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.305572   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.385788   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.386844   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:42.995794   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:45.490630   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:44.474251   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.974425   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.306070   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.805530   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:46.886095   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:48.888504   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:47.491269   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.990921   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:49.474354   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.973552   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:50.806526   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.807021   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:51.385411   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.385825   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:52.490166   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:54.991982   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:53.974372   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:56.473350   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.305863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.306450   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.308315   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:55.886560   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.886950   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.386043   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:57.490604   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:59.490811   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.993715   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:49:58.973152   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:00.975078   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.474589   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:01.806409   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:03.806552   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:02.387458   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.886066   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:04.490551   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:06.490632   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.974290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.974714   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:05.810256   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.305443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:07.386252   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:09.887808   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:08.490994   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.990417   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.474207   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.973759   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:10.305662   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.807626   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.385387   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.386055   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:12.991196   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.489856   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:14.974362   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.474890   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:15.305348   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.306521   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.306661   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:16.386682   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:18.386805   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:17.491969   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.990884   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.991904   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:19.476052   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.973290   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:21.806863   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.810113   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:20.886118   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.388653   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:24.490861   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.991437   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:23.974132   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.474556   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:26.307894   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.809126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:25.885409   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:27.886080   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.386151   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:29.489358   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.491041   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:28.973725   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:30.975342   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.474590   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:31.306171   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.307126   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:32.386190   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:34.886414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:33.491383   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.492155   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.974978   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:38.473506   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:35.307221   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.806174   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.386235   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.886579   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:37.990447   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:39.991649   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.474117   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.973778   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:40.308130   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.806411   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.807765   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.385199   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.387102   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:42.491019   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.993076   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:44.974689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.473863   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.305509   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.305825   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:46.885280   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.385189   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:47.491661   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.989457   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.991512   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:49.973709   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.976112   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.306459   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.805441   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:51.386498   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:53.887424   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.492074   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.989668   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:54.473073   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.473689   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.474597   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:55.806711   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.305434   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:56.386640   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.885298   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:50:58.995348   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:01.491262   45954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.974371   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.474367   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.305803   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.806120   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:04.807184   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:00.886357   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:02.887274   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:05.386976   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:03.708637   45954 pod_ready.go:81] duration metric: took 4m0.000105295s waiting for pod "metrics-server-57f55c9bc5-hfgp8" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:03.708672   45954 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:03.708681   45954 pod_ready.go:38] duration metric: took 4m6.567418041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:03.708699   45954 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:51:03.708739   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:03.708804   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:03.759664   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:03.759688   45954 cri.go:89] found id: ""
	I0914 22:51:03.759697   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:03.759753   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.764736   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:03.764789   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:03.800251   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:03.800280   45954 cri.go:89] found id: ""
	I0914 22:51:03.800290   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:03.800341   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.804761   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:03.804818   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:03.847136   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:03.847162   45954 cri.go:89] found id: ""
	I0914 22:51:03.847172   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:03.847215   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.851253   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:03.851325   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:03.882629   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:03.882654   45954 cri.go:89] found id: ""
	I0914 22:51:03.882664   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:03.882713   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.887586   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:03.887642   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:03.916702   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:03.916723   45954 cri.go:89] found id: ""
	I0914 22:51:03.916730   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:03.916773   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.921172   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:03.921232   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:03.950593   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:03.950618   45954 cri.go:89] found id: ""
	I0914 22:51:03.950628   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:03.950689   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:03.954303   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:03.954366   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:03.982565   45954 cri.go:89] found id: ""
	I0914 22:51:03.982588   45954 logs.go:284] 0 containers: []
	W0914 22:51:03.982597   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:03.982604   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:03.982662   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:04.011932   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.011957   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:04.011964   45954 cri.go:89] found id: ""
	I0914 22:51:04.011972   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:04.012026   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.016091   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:04.019830   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:04.019852   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:04.061469   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:04.061494   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:04.092823   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:04.092846   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:04.156150   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:04.156190   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:04.169879   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:04.169920   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:04.226165   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:04.226198   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:04.255658   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:04.255692   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:04.299368   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:04.299401   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:04.440433   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:04.440467   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:04.477396   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:04.477425   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:04.513399   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:04.513431   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:05.016889   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:05.016925   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:05.067712   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:05.067749   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:05.973423   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.973637   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.307754   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.805419   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.389465   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:09.885150   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:07.597529   45954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:51:07.614053   45954 api_server.go:72] duration metric: took 4m15.435815174s to wait for apiserver process to appear ...
	I0914 22:51:07.614076   45954 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:51:07.614106   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:07.614155   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:07.643309   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:07.643333   45954 cri.go:89] found id: ""
	I0914 22:51:07.643342   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:07.643411   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.647434   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:07.647511   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:07.676943   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:07.676959   45954 cri.go:89] found id: ""
	I0914 22:51:07.676966   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:07.677006   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.681053   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:07.681101   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:07.714710   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:07.714736   45954 cri.go:89] found id: ""
	I0914 22:51:07.714745   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:07.714807   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.718900   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:07.718966   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:07.754786   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:07.754808   45954 cri.go:89] found id: ""
	I0914 22:51:07.754815   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:07.754867   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.759623   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:07.759693   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:07.794366   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:07.794389   45954 cri.go:89] found id: ""
	I0914 22:51:07.794398   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:07.794457   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.798717   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:07.798777   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:07.831131   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:07.831158   45954 cri.go:89] found id: ""
	I0914 22:51:07.831167   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:07.831227   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.835696   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:07.835762   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:07.865802   45954 cri.go:89] found id: ""
	I0914 22:51:07.865831   45954 logs.go:284] 0 containers: []
	W0914 22:51:07.865841   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:07.865849   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:07.865905   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:07.895025   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:07.895049   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:07.895056   45954 cri.go:89] found id: ""
	I0914 22:51:07.895064   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:07.895118   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.899230   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:07.903731   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:07.903751   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:08.033922   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:08.033952   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:08.068784   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:08.068812   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:08.120395   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:08.120428   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:08.133740   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:08.133763   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:08.173288   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:08.173320   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:08.203964   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:08.203988   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:08.732327   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:08.732367   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:08.784110   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:08.784150   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:08.819179   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:08.819210   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:08.866612   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:08.866644   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:08.900892   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:08.900939   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:08.950563   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:08.950593   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:11.505428   45954 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I0914 22:51:11.511228   45954 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I0914 22:51:11.512855   45954 api_server.go:141] control plane version: v1.28.1
	I0914 22:51:11.512881   45954 api_server.go:131] duration metric: took 3.898798182s to wait for apiserver health ...
	I0914 22:51:11.512891   45954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:51:11.512911   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:51:11.512954   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:51:11.544538   45954 cri.go:89] found id: "f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:11.544563   45954 cri.go:89] found id: ""
	I0914 22:51:11.544573   45954 logs.go:284] 1 containers: [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019]
	I0914 22:51:11.544629   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.548878   45954 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:51:11.548946   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:51:11.578439   45954 cri.go:89] found id: "95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:11.578464   45954 cri.go:89] found id: ""
	I0914 22:51:11.578473   45954 logs.go:284] 1 containers: [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0]
	I0914 22:51:11.578531   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.582515   45954 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:51:11.582576   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:51:11.611837   45954 cri.go:89] found id: "809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:11.611857   45954 cri.go:89] found id: ""
	I0914 22:51:11.611863   45954 logs.go:284] 1 containers: [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b]
	I0914 22:51:11.611917   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.615685   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:51:11.615744   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:51:11.645850   45954 cri.go:89] found id: "8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:11.645869   45954 cri.go:89] found id: ""
	I0914 22:51:11.645876   45954 logs.go:284] 1 containers: [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c]
	I0914 22:51:11.645914   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.649995   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:51:11.650048   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:51:11.683515   45954 cri.go:89] found id: "da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:11.683541   45954 cri.go:89] found id: ""
	I0914 22:51:11.683550   45954 logs.go:284] 1 containers: [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb]
	I0914 22:51:11.683604   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.687715   45954 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:51:11.687806   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:51:11.721411   45954 cri.go:89] found id: "dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.721428   45954 cri.go:89] found id: ""
	I0914 22:51:11.721434   45954 logs.go:284] 1 containers: [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2]
	I0914 22:51:11.721477   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.725801   45954 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:51:11.725859   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:51:11.760391   45954 cri.go:89] found id: ""
	I0914 22:51:11.760417   45954 logs.go:284] 0 containers: []
	W0914 22:51:11.760427   45954 logs.go:286] No container was found matching "kindnet"
	I0914 22:51:11.760437   45954 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:51:11.760498   45954 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:51:11.792140   45954 cri.go:89] found id: "f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.792162   45954 cri.go:89] found id: "5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:11.792168   45954 cri.go:89] found id: ""
	I0914 22:51:11.792175   45954 logs.go:284] 2 containers: [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc]
	I0914 22:51:11.792234   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.796600   45954 ssh_runner.go:195] Run: which crictl
	I0914 22:51:11.800888   45954 logs.go:123] Gathering logs for kubelet ...
	I0914 22:51:11.800912   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:51:11.863075   45954 logs.go:123] Gathering logs for dmesg ...
	I0914 22:51:11.863106   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:51:11.877744   45954 logs.go:123] Gathering logs for kube-controller-manager [dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2] ...
	I0914 22:51:11.877775   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae1ba10c6d573629ca27357a01ec9fa356dcd072c8492d37e94b96a531a67b2"
	I0914 22:51:11.930381   45954 logs.go:123] Gathering logs for storage-provisioner [f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2] ...
	I0914 22:51:11.930418   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5ece5e451cf6b09c8ff951625fc2aa4ed5abea46d30eb8a5941d47e92e75ff2"
	I0914 22:51:11.961471   45954 logs.go:123] Gathering logs for kube-apiserver [f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019] ...
	I0914 22:51:11.961497   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f149a35f9882655c67b8a52f0db99b10f0957efe181c36dedf2f62f7753dc019"
	I0914 22:51:12.005391   45954 logs.go:123] Gathering logs for coredns [809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b] ...
	I0914 22:51:12.005417   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809210de2cd6417f68596880c98b6f53aa09b6e2794fefacfd9d1acc4612241b"
	I0914 22:51:12.034742   45954 logs.go:123] Gathering logs for kube-scheduler [8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c] ...
	I0914 22:51:12.034771   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e23190d2ef5480b68f284daa479c07e15040e37355c4efcae669bce48b5b28c"
	I0914 22:51:12.064672   45954 logs.go:123] Gathering logs for kube-proxy [da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb] ...
	I0914 22:51:12.064702   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da519760d06f2dfc14a0b255f15a3ec91cbecff35d0dd237b5ccff71d6540abb"
	I0914 22:51:12.095801   45954 logs.go:123] Gathering logs for storage-provisioner [5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc] ...
	I0914 22:51:12.095834   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a644b09188e625ec4fd329f55c34441e8374c245fd445a0e0af69494f757bfc"
	I0914 22:51:12.124224   45954 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:51:12.124260   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:51:09.974433   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.975389   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:11.806380   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.807443   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:12.657331   45954 logs.go:123] Gathering logs for etcd [95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0] ...
	I0914 22:51:12.657375   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95a2e35f25145a8db9493a68aec135d45aadf337f00dd5efb656ed10d1b42df0"
	I0914 22:51:12.718197   45954 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:51:12.718227   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:51:12.845353   45954 logs.go:123] Gathering logs for container status ...
	I0914 22:51:12.845381   45954 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:51:15.395502   45954 system_pods.go:59] 8 kube-system pods found
	I0914 22:51:15.395524   45954 system_pods.go:61] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.395529   45954 system_pods.go:61] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.395534   45954 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.395540   45954 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.395544   45954 system_pods.go:61] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.395548   45954 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.395554   45954 system_pods.go:61] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.395559   45954 system_pods.go:61] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.395565   45954 system_pods.go:74] duration metric: took 3.882669085s to wait for pod list to return data ...
	I0914 22:51:15.395572   45954 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:51:15.398128   45954 default_sa.go:45] found service account: "default"
	I0914 22:51:15.398148   45954 default_sa.go:55] duration metric: took 2.571314ms for default service account to be created ...
	I0914 22:51:15.398155   45954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:51:15.407495   45954 system_pods.go:86] 8 kube-system pods found
	I0914 22:51:15.407517   45954 system_pods.go:89] "coredns-5dd5756b68-8phxz" [45bf5b67-3fc3-4aa7-90a0-2a2957384380] Running
	I0914 22:51:15.407522   45954 system_pods.go:89] "etcd-default-k8s-diff-port-799144" [89e84620-31c0-4afa-a798-f68f71ea74f5] Running
	I0914 22:51:15.407527   45954 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799144" [d8a64809-2162-4dd5-a9e8-c572319818e2] Running
	I0914 22:51:15.407532   45954 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799144" [79a14cac-4087-4ea5-9a7c-87cbf38b1cdc] Running
	I0914 22:51:15.407535   45954 system_pods.go:89] "kube-proxy-j2qmv" [ca04e473-7bc4-4d56-ade1-0ae559f40dc9] Running
	I0914 22:51:15.407540   45954 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799144" [5e615975-fcd3-4a79-863d-4794ce52ff6f] Running
	I0914 22:51:15.407549   45954 system_pods.go:89] "metrics-server-57f55c9bc5-hfgp8" [09b0d4cf-ab11-4677-88c4-f530af4643e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:51:15.407558   45954 system_pods.go:89] "storage-provisioner" [ccb8a357-0b1f-41ad-b5ba-dea4f1a690c7] Running
	I0914 22:51:15.407576   45954 system_pods.go:126] duration metric: took 9.409452ms to wait for k8s-apps to be running ...
	I0914 22:51:15.407587   45954 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:51:15.407633   45954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:15.424728   45954 system_svc.go:56] duration metric: took 17.122868ms WaitForService to wait for kubelet.
	I0914 22:51:15.424754   45954 kubeadm.go:581] duration metric: took 4m23.246518879s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:51:15.424794   45954 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:51:15.428492   45954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:51:15.428520   45954 node_conditions.go:123] node cpu capacity is 2
	I0914 22:51:15.428534   45954 node_conditions.go:105] duration metric: took 3.733977ms to run NodePressure ...
	I0914 22:51:15.428550   45954 start.go:228] waiting for startup goroutines ...
	I0914 22:51:15.428563   45954 start.go:233] waiting for cluster config update ...
	I0914 22:51:15.428576   45954 start.go:242] writing updated cluster config ...
	I0914 22:51:15.428887   45954 ssh_runner.go:195] Run: rm -f paused
	I0914 22:51:15.479711   45954 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:51:15.482387   45954 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799144" cluster and "default" namespace by default
	I0914 22:51:11.885968   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:13.887391   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:14.474188   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.974056   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.306146   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.806037   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:16.386306   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:18.386406   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:19.474446   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:21.474860   46412 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.375841   46412 pod_ready.go:81] duration metric: took 4m0.000552226s waiting for pod "metrics-server-57f55c9bc5-zvk82" in "kube-system" namespace to be "Ready" ...
	E0914 22:51:22.375872   46412 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:51:22.375890   46412 pod_ready.go:38] duration metric: took 4m12.961510371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:51:22.375915   46412 kubeadm.go:640] restartCluster took 4m33.075347594s
	W0914 22:51:22.375983   46412 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:51:22.376022   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:51:20.806249   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:22.807141   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:24.809235   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:20.888098   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:23.386482   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:25.386542   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.305114   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:29.306240   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:27.886695   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:30.385740   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:31.306508   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:33.306655   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:32.886111   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.384925   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:35.805992   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:38.307801   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:37.385344   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:39.888303   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:40.806212   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:43.305815   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:42.388414   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:44.388718   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:45.306197   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:47.806983   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:49.807150   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:46.885737   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:48.885794   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.115476   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.73941793s)
	I0914 22:51:53.115549   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:51:53.128821   46412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:51:53.137267   46412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:51:53.145533   46412 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:51:53.145569   46412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 22:51:53.202279   46412 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 22:51:53.202501   46412 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:51:53.353512   46412 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:51:53.353674   46412 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:51:53.353816   46412 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:51:53.513428   46412 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:51:53.515450   46412 out.go:204]   - Generating certificates and keys ...
	I0914 22:51:53.515574   46412 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:51:53.515672   46412 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:51:53.515785   46412 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:51:53.515896   46412 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:51:53.516234   46412 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:51:53.516841   46412 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:51:53.517488   46412 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:51:53.517974   46412 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:51:53.518563   46412 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:51:53.519109   46412 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:51:53.519728   46412 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:51:53.519809   46412 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:51:53.641517   46412 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:51:53.842920   46412 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:51:53.982500   46412 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:51:54.065181   46412 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:51:54.065678   46412 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:51:54.071437   46412 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:51:52.305643   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.305996   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:51.386246   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:53.386956   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:54.073206   46412 out.go:204]   - Booting up control plane ...
	I0914 22:51:54.073363   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:51:54.073470   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:51:54.073554   46412 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:51:54.091178   46412 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:51:54.091289   46412 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:51:54.091371   46412 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:51:54.221867   46412 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:51:56.306473   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:58.306953   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:55.886624   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:51:57.887222   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:00.385756   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.225144   46412 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002879 seconds
	I0914 22:52:02.225314   46412 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:02.244705   46412 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:02.778808   46412 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:02.779047   46412 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-588699 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:52:03.296381   46412 kubeadm.go:322] [bootstrap-token] Using token: x2l9oo.p0a5g5jx49srzji3
	I0914 22:52:03.297976   46412 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:03.298091   46412 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:03.308475   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:52:03.319954   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:03.325968   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:03.330375   46412 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:03.334686   46412 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:03.353185   46412 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:52:03.622326   46412 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:03.721359   46412 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:03.721385   46412 kubeadm.go:322] 
	I0914 22:52:03.721472   46412 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:03.721486   46412 kubeadm.go:322] 
	I0914 22:52:03.721589   46412 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:03.721602   46412 kubeadm.go:322] 
	I0914 22:52:03.721623   46412 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:03.721678   46412 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:03.721727   46412 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:03.721764   46412 kubeadm.go:322] 
	I0914 22:52:03.721856   46412 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 22:52:03.721867   46412 kubeadm.go:322] 
	I0914 22:52:03.721945   46412 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:52:03.721954   46412 kubeadm.go:322] 
	I0914 22:52:03.722029   46412 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:03.722137   46412 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:03.722240   46412 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:03.722250   46412 kubeadm.go:322] 
	I0914 22:52:03.722367   46412 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:52:03.722468   46412 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:03.722479   46412 kubeadm.go:322] 
	I0914 22:52:03.722583   46412 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.722694   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:03.722719   46412 kubeadm.go:322] 	--control-plane 
	I0914 22:52:03.722752   46412 kubeadm.go:322] 
	I0914 22:52:03.722887   46412 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:03.722912   46412 kubeadm.go:322] 
	I0914 22:52:03.723031   46412 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x2l9oo.p0a5g5jx49srzji3 \
	I0914 22:52:03.723170   46412 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:03.724837   46412 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:52:03.724867   46412 cni.go:84] Creating CNI manager for ""
	I0914 22:52:03.724879   46412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:03.726645   46412 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:03.728115   46412 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:03.741055   46412 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
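The scp line above copies a 457-byte bridge CNI conflist into /etc/cni/net.d. The log does not show the file's contents, so the sketch below only illustrates, under assumptions, what a typical bridge+portmap conflist for this setup might look like and how it would be written; the JSON values (subnet, plugin options) are illustrative, not minikube's literal file.

package main

import "os"

// Illustrative bridge+portmap conflist; field values are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// 0644 so the kubelet and CRI-O can read the network config.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}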
	I0914 22:52:03.811746   46412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:03.811823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=embed-certs-588699 minikube.k8s.io/updated_at=2023_09_14T22_52_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:03.811827   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:00.805633   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.805831   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.807503   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:02.885499   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.886940   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:04.097721   46412 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:04.097763   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.186240   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:04.773886   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.273494   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:05.773993   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.274084   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.773309   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.273666   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:07.773916   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.274226   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:08.774073   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:06.807538   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.306062   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:06.886980   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.385212   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:09.274041   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:09.773409   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.274272   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:10.774321   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.274268   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.774250   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.273823   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:12.774000   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.273596   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:13.774284   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:11.806015   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:14.308997   45407 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:11.386087   46713 pod_ready.go:102] pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:12.580003   46713 pod_ready.go:81] duration metric: took 4m0.001053291s waiting for pod "metrics-server-74d5856cc6-6ps2q" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:12.580035   46713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:12.580062   46713 pod_ready.go:38] duration metric: took 4m1.199260232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:12.580089   46713 kubeadm.go:640] restartCluster took 4m59.591702038s
	W0914 22:52:12.580145   46713 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0914 22:52:12.580169   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 22:52:14.274174   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:14.773472   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.273376   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:15.773286   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.273920   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.773334   46412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:16.926033   46412 kubeadm.go:1081] duration metric: took 13.114277677s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:16.926076   46412 kubeadm.go:406] StartCluster complete in 5m27.664586228s
	I0914 22:52:16.926099   46412 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.926229   46412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:16.928891   46412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:16.929177   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:16.929313   46412 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:16.929393   46412 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-588699"
	I0914 22:52:16.929408   46412 addons.go:69] Setting default-storageclass=true in profile "embed-certs-588699"
	I0914 22:52:16.929423   46412 addons.go:69] Setting metrics-server=true in profile "embed-certs-588699"
	I0914 22:52:16.929435   46412 addons.go:231] Setting addon metrics-server=true in "embed-certs-588699"
	W0914 22:52:16.929446   46412 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:16.929446   46412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-588699"
	I0914 22:52:16.929475   46412 config.go:182] Loaded profile config "embed-certs-588699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:52:16.929508   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929418   46412 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-588699"
	W0914 22:52:16.929533   46412 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:16.929574   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.929907   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929938   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929939   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929963   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.929968   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.929995   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.948975   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I0914 22:52:16.948990   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0914 22:52:16.948977   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0914 22:52:16.949953   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950006   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.949957   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.950601   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950607   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950620   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950626   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.950632   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.950647   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.951159   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951191   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951410   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.951808   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951829   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.951896   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.951906   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.951911   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.961182   46412 addons.go:231] Setting addon default-storageclass=true in "embed-certs-588699"
	W0914 22:52:16.961207   46412 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:16.961236   46412 host.go:66] Checking if "embed-certs-588699" exists ...
	I0914 22:52:16.961615   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.961637   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.976517   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0914 22:52:16.976730   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0914 22:52:16.977005   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977161   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.977448   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977466   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977564   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.977589   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.977781   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977913   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.977966   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.978108   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:16.980084   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.980429   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:16.982113   46412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:16.983227   46412 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:16.984383   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:16.984394   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:16.984407   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.983307   46412 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:16.984439   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:16.984455   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:16.987850   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0914 22:52:16.987989   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988270   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:16.988771   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:16.988788   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:16.988849   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.988867   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.988894   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.989058   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.989528   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:16.989748   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.990151   46412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:16.990172   46412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:16.990441   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:16.990597   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.990766   46412 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-588699" context rescaled to 1 replicas
	I0914 22:52:16.990794   46412 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:16.992351   46412 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:16.990986   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:16.991129   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:16.994003   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:16.994015   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:16.994097   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:16.994300   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:16.994607   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.007652   46412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0914 22:52:17.008127   46412 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:17.008676   46412 main.go:141] libmachine: Using API Version  1
	I0914 22:52:17.008699   46412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:17.009115   46412 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:17.009301   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetState
	I0914 22:52:17.010905   46412 main.go:141] libmachine: (embed-certs-588699) Calling .DriverName
	I0914 22:52:17.011169   46412 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.011183   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:17.011201   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHHostname
	I0914 22:52:17.014427   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.014837   46412 main.go:141] libmachine: (embed-certs-588699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:60:d3", ip: ""} in network mk-embed-certs-588699: {Iface:virbr1 ExpiryTime:2023-09-14 23:46:34 +0000 UTC Type:0 Mac:52:54:00:a8:60:d3 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:embed-certs-588699 Clientid:01:52:54:00:a8:60:d3}
	I0914 22:52:17.014865   46412 main.go:141] libmachine: (embed-certs-588699) DBG | domain embed-certs-588699 has defined IP address 192.168.61.205 and MAC address 52:54:00:a8:60:d3 in network mk-embed-certs-588699
	I0914 22:52:17.015132   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHPort
	I0914 22:52:17.015299   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHKeyPath
	I0914 22:52:17.015435   46412 main.go:141] libmachine: (embed-certs-588699) Calling .GetSSHUsername
	I0914 22:52:17.015585   46412 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/embed-certs-588699/id_rsa Username:docker}
	I0914 22:52:17.124720   46412 node_ready.go:35] waiting up to 6m0s for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.124831   46412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:17.128186   46412 node_ready.go:49] node "embed-certs-588699" has status "Ready":"True"
	I0914 22:52:17.128211   46412 node_ready.go:38] duration metric: took 3.459847ms waiting for node "embed-certs-588699" to be "Ready" ...
	I0914 22:52:17.128221   46412 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.133021   46412 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138574   46412 pod_ready.go:92] pod "etcd-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.138594   46412 pod_ready.go:81] duration metric: took 5.550933ms waiting for pod "etcd-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.138605   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151548   46412 pod_ready.go:92] pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.151569   46412 pod_ready.go:81] duration metric: took 12.956129ms waiting for pod "kube-apiserver-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.151581   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169368   46412 pod_ready.go:92] pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.169393   46412 pod_ready.go:81] duration metric: took 17.803681ms waiting for pod "kube-controller-manager-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.169406   46412 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.180202   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:17.180227   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:17.184052   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:17.227381   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:17.227411   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:17.233773   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:17.293762   46412 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:17.293788   46412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:17.328911   46412 pod_ready.go:92] pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace has status "Ready":"True"
	I0914 22:52:17.328934   46412 pod_ready.go:81] duration metric: took 159.520585ms waiting for pod "kube-scheduler-embed-certs-588699" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:17.328942   46412 pod_ready.go:38] duration metric: took 200.709608ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:17.328958   46412 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:17.329008   46412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:17.379085   46412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:18.947663   46412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.822786746s)
	I0914 22:52:18.947705   46412 start.go:917] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:19.171809   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937996858s)
	I0914 22:52:19.171861   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171872   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.98779094s)
	I0914 22:52:19.171908   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.171927   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171878   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.171875   46412 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.842825442s)
	I0914 22:52:19.172234   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172277   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172292   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172289   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172307   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172322   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172352   46412 api_server.go:72] duration metric: took 2.181532709s to wait for apiserver process to appear ...
	I0914 22:52:19.172322   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172369   46412 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.172377   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172387   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172396   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172410   46412 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I0914 22:52:19.172625   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172643   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172657   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.172667   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.172688   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.172715   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172723   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.172955   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.172969   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.173012   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.205041   46412 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I0914 22:52:19.209533   46412 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:19.209561   46412 api_server.go:131] duration metric: took 37.185195ms to wait for apiserver health ...
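The api_server lines above show the health gate minikube applies before checking system pods: GET https://<apiserver>:8443/healthz and treat a 200 with body "ok" as healthy. A minimal sketch of that probe; TLS verification is skipped here purely for brevity, whereas minikube's own client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
	}
	resp, err := client.Get("https://192.168.61.205:8443/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}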
	I0914 22:52:19.209573   46412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:19.225866   46412 system_pods.go:59] 7 kube-system pods found
	I0914 22:52:19.225893   46412 system_pods.go:61] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.225900   46412 system_pods.go:61] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.225908   46412 system_pods.go:61] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.225915   46412 system_pods.go:61] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.225921   46412 system_pods.go:61] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.225928   46412 system_pods.go:61] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.225934   46412 system_pods.go:61] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending
	I0914 22:52:19.225947   46412 system_pods.go:74] duration metric: took 16.366454ms to wait for pod list to return data ...
	I0914 22:52:19.225958   46412 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:19.232176   46412 default_sa.go:45] found service account: "default"
	I0914 22:52:19.232202   46412 default_sa.go:55] duration metric: took 6.234795ms for default service account to be created ...
	I0914 22:52:19.232221   46412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:19.238383   46412 system_pods.go:86] 7 kube-system pods found
	I0914 22:52:19.238415   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.238426   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.238433   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.238442   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.238448   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.238454   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.238463   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.238486   46412 retry.go:31] will retry after 271.864835ms: missing components: kube-dns
	I0914 22:52:19.431792   46412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.052667289s)
	I0914 22:52:19.431858   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.431875   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432217   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432254   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432265   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432277   46412 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:19.432291   46412 main.go:141] libmachine: (embed-certs-588699) Calling .Close
	I0914 22:52:19.432561   46412 main.go:141] libmachine: (embed-certs-588699) DBG | Closing plugin on server side
	I0914 22:52:19.432615   46412 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:19.432626   46412 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:19.432637   46412 addons.go:467] Verifying addon metrics-server=true in "embed-certs-588699"
	I0914 22:52:19.434406   46412 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:15.499654   45407 pod_ready.go:81] duration metric: took 4m0.00095032s waiting for pod "metrics-server-57f55c9bc5-swnnf" in "kube-system" namespace to be "Ready" ...
	E0914 22:52:15.499683   45407 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 22:52:15.499692   45407 pod_ready.go:38] duration metric: took 4m4.80145633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:15.499709   45407 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:52:15.499741   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:15.499821   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:15.551531   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:15.551573   45407 cri.go:89] found id: ""
	I0914 22:52:15.551584   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:15.551638   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.555602   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:15.555649   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:15.583476   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:15.583497   45407 cri.go:89] found id: ""
	I0914 22:52:15.583504   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:15.583541   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.587434   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:15.587499   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:15.614791   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:15.614813   45407 cri.go:89] found id: ""
	I0914 22:52:15.614821   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:15.614865   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.618758   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:15.618813   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:15.651772   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:15.651798   45407 cri.go:89] found id: ""
	I0914 22:52:15.651807   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:15.651862   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.656464   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:15.656533   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:15.701258   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:15.701289   45407 cri.go:89] found id: ""
	I0914 22:52:15.701299   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:15.701359   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.705980   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:15.706049   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:15.741616   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:15.741640   45407 cri.go:89] found id: ""
	I0914 22:52:15.741647   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:15.741702   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.745863   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:15.745913   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:15.779362   45407 cri.go:89] found id: ""
	I0914 22:52:15.779385   45407 logs.go:284] 0 containers: []
	W0914 22:52:15.779395   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:15.779403   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:15.779462   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:15.815662   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:15.815691   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.815698   45407 cri.go:89] found id: ""
	I0914 22:52:15.815707   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:15.815781   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.820879   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:15.826312   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:15.826338   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:15.864143   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:15.864175   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:16.401646   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:16.401689   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:16.442964   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:16.443000   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:16.612411   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:16.612444   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:16.664620   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:16.664652   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:16.702405   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:16.702432   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:16.738583   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:16.738615   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:16.752752   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:16.752788   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:16.793883   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:16.793924   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:16.825504   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:16.825531   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:16.879008   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:16.879046   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:16.910902   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:16.910941   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
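(Editor's note: the log-gathering cycle above can be reproduced by hand on the node, e.g. after "minikube -p <profile> ssh". A minimal sketch using the same commands the log shows, assuming crictl is installed and CRI-O is the runtime:)

    # list the kube-apiserver container ID, then tail its logs (same commands the test runs)
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo crictl logs --tail 400 "$ID"
    # runtime and kubelet logs come from journald
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400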
	I0914 22:52:19.477726   45407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:52:19.494214   45407 api_server.go:72] duration metric: took 4m15.689238s to wait for apiserver process to appear ...
	I0914 22:52:19.494240   45407 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:52:19.494281   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:19.494341   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:19.534990   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:19.535014   45407 cri.go:89] found id: ""
	I0914 22:52:19.535023   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:19.535081   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.540782   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:19.540850   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:19.570364   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:19.570390   45407 cri.go:89] found id: ""
	I0914 22:52:19.570399   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:19.570465   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.575964   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:19.576027   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:19.608023   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:19.608047   45407 cri.go:89] found id: ""
	I0914 22:52:19.608056   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:19.608098   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.612290   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:19.612343   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:19.644658   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:19.644682   45407 cri.go:89] found id: ""
	I0914 22:52:19.644692   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:19.644743   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.651016   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:19.651092   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:19.693035   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:19.693059   45407 cri.go:89] found id: ""
	I0914 22:52:19.693068   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:19.693122   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.697798   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:19.697864   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:19.733805   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.733828   45407 cri.go:89] found id: ""
	I0914 22:52:19.733837   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:19.733890   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.737902   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:19.737976   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:19.765139   45407 cri.go:89] found id: ""
	I0914 22:52:19.765169   45407 logs.go:284] 0 containers: []
	W0914 22:52:19.765180   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:19.765188   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:19.765248   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:19.793734   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.793756   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:19.793761   45407 cri.go:89] found id: ""
	I0914 22:52:19.793767   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:19.793807   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.797559   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:19.801472   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:19.801492   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:19.937110   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:19.937138   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:19.987564   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:19.987599   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:19.436138   46412 addons.go:502] enable addons completed in 2.506819532s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 22:52:19.523044   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.523077   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.523089   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.523096   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.523103   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.523109   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.523115   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.523124   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.523137   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.523164   46412 retry.go:31] will retry after 369.359833ms: missing components: kube-dns
	I0914 22:52:19.900488   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:19.900529   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:19.900541   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:19.900550   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:19.900558   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:19.900564   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:19.900571   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:19.900587   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:19.900608   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:19.900630   46412 retry.go:31] will retry after 329.450987ms: missing components: kube-dns
	I0914 22:52:20.245124   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.245152   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.245160   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.245166   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.245171   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.245177   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.245185   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.245194   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.245204   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.245225   46412 retry.go:31] will retry after 392.738624ms: missing components: kube-dns
	I0914 22:52:20.645671   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:20.645706   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 22:52:20.645716   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:20.645725   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:20.645737   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:20.645747   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:20.645756   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:20.645770   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:20.645783   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:20.645803   46412 retry.go:31] will retry after 463.608084ms: missing components: kube-dns
	I0914 22:52:21.118889   46412 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:21.118920   46412 system_pods.go:89] "coredns-5dd5756b68-ws5b8" [8b20fa8b-7e33-45e9-9e39-adbfbc0890a1] Running
	I0914 22:52:21.118926   46412 system_pods.go:89] "etcd-embed-certs-588699" [47d1a87a-458c-4832-b46d-d71acd316d7b] Running
	I0914 22:52:21.118931   46412 system_pods.go:89] "kube-apiserver-embed-certs-588699" [cd07cd32-27b8-4304-b3b5-8db4401fef36] Running
	I0914 22:52:21.118937   46412 system_pods.go:89] "kube-controller-manager-embed-certs-588699" [c279440b-21c4-45fd-9895-d243f76a98b7] Running
	I0914 22:52:21.118941   46412 system_pods.go:89] "kube-proxy-9gwgv" [d702b24f-9d6e-4650-8892-0be54cb46991] Running
	I0914 22:52:21.118946   46412 system_pods.go:89] "kube-scheduler-embed-certs-588699" [4c8225fd-0a36-4a48-8494-22fa7484c6b7] Running
	I0914 22:52:21.118954   46412 system_pods.go:89] "metrics-server-57f55c9bc5-wb27t" [41d83cd2-a4b5-4b49-99ac-2fa390769083] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:21.118963   46412 system_pods.go:89] "storage-provisioner" [1c40fd3f-cdee-4408-87f1-c732015460c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 22:52:21.118971   46412 system_pods.go:126] duration metric: took 1.886741356s to wait for k8s-apps to be running ...
	I0914 22:52:21.118984   46412 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:21.119025   46412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:21.134331   46412 system_svc.go:56] duration metric: took 15.34035ms WaitForService to wait for kubelet.
	I0914 22:52:21.134358   46412 kubeadm.go:581] duration metric: took 4.143541631s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:21.134381   46412 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:21.137182   46412 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:21.137207   46412 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:21.137230   46412 node_conditions.go:105] duration metric: took 2.834168ms to run NodePressure ...
	I0914 22:52:21.137243   46412 start.go:228] waiting for startup goroutines ...
	I0914 22:52:21.137252   46412 start.go:233] waiting for cluster config update ...
	I0914 22:52:21.137272   46412 start.go:242] writing updated cluster config ...
	I0914 22:52:21.137621   46412 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:21.184252   46412 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:21.186251   46412 out.go:177] * Done! kubectl is now configured to use "embed-certs-588699" cluster and "default" namespace by default
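(Editor's note: once a profile reports "Done!", the state the test keeps polling for can be checked by hand with the context name printed in the line above. A minimal sketch, assuming kubectl is configured as the log says:)

    kubectl --context embed-certs-588699 get nodes
    kubectl --context embed-certs-588699 -n kube-system get pods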
	I0914 22:52:20.022483   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:20.022512   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:20.062375   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:20.062403   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:20.099744   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:20.099776   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:20.129490   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:20.129515   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:20.165896   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:20.165922   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:20.692724   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:20.692758   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:20.761038   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:20.761086   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:20.777087   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:20.777114   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:20.808980   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:20.809020   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:20.845904   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:20.845942   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.393816   45407 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0914 22:52:23.399946   45407 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0914 22:52:23.401251   45407 api_server.go:141] control plane version: v1.28.1
	I0914 22:52:23.401271   45407 api_server.go:131] duration metric: took 3.907024801s to wait for apiserver health ...
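(Editor's note: the healthz probe above can be repeated against the same endpoint. A minimal sketch; the URL is taken from the log, and unauthenticated curl access to /healthz depends on the cluster's RBAC defaults:)

    kubectl --context no-preload-344363 get --raw='/healthz'
    # or, directly against the endpoint shown above (may require credentials):
    curl -k https://192.168.39.60:8443/healthz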
	I0914 22:52:23.401279   45407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:52:23.401303   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 22:52:23.401346   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 22:52:23.433871   45407 cri.go:89] found id: "33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:23.433895   45407 cri.go:89] found id: ""
	I0914 22:52:23.433905   45407 logs.go:284] 1 containers: [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043]
	I0914 22:52:23.433962   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.438254   45407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 22:52:23.438317   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 22:52:23.468532   45407 cri.go:89] found id: "db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:23.468555   45407 cri.go:89] found id: ""
	I0914 22:52:23.468564   45407 logs.go:284] 1 containers: [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38]
	I0914 22:52:23.468626   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.473599   45407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 22:52:23.473658   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 22:52:23.509951   45407 cri.go:89] found id: "8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:23.509976   45407 cri.go:89] found id: ""
	I0914 22:52:23.509986   45407 logs.go:284] 1 containers: [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a]
	I0914 22:52:23.510041   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.516637   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 22:52:23.516722   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 22:52:23.549562   45407 cri.go:89] found id: "6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.549587   45407 cri.go:89] found id: ""
	I0914 22:52:23.549596   45407 logs.go:284] 1 containers: [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566]
	I0914 22:52:23.549653   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.553563   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 22:52:23.553626   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 22:52:23.584728   45407 cri.go:89] found id: "eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:23.584749   45407 cri.go:89] found id: ""
	I0914 22:52:23.584756   45407 logs.go:284] 1 containers: [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1]
	I0914 22:52:23.584797   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.588600   45407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 22:52:23.588653   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 22:52:23.616590   45407 cri.go:89] found id: "d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.616609   45407 cri.go:89] found id: ""
	I0914 22:52:23.616617   45407 logs.go:284] 1 containers: [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2]
	I0914 22:52:23.616669   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.620730   45407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 22:52:23.620782   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 22:52:23.648741   45407 cri.go:89] found id: ""
	I0914 22:52:23.648765   45407 logs.go:284] 0 containers: []
	W0914 22:52:23.648773   45407 logs.go:286] No container was found matching "kindnet"
	I0914 22:52:23.648781   45407 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 22:52:23.648831   45407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 22:52:23.680814   45407 cri.go:89] found id: "0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:23.680839   45407 cri.go:89] found id: "a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:23.680846   45407 cri.go:89] found id: ""
	I0914 22:52:23.680854   45407 logs.go:284] 2 containers: [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669]
	I0914 22:52:23.680914   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.685954   45407 ssh_runner.go:195] Run: which crictl
	I0914 22:52:23.690428   45407 logs.go:123] Gathering logs for describe nodes ...
	I0914 22:52:23.690459   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 22:52:23.818421   45407 logs.go:123] Gathering logs for kube-controller-manager [d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2] ...
	I0914 22:52:23.818456   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d670d4deec4bc00bcc12264a8fc3a172a5667f165c88b3767e4a96215af7e5f2"
	I0914 22:52:23.867863   45407 logs.go:123] Gathering logs for kube-scheduler [6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566] ...
	I0914 22:52:23.867894   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fa0d09d74d548ccaf9e15820e19228a379c53e1f8582a3830200106e1572566"
	I0914 22:52:23.903362   45407 logs.go:123] Gathering logs for container status ...
	I0914 22:52:23.903393   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 22:52:23.943793   45407 logs.go:123] Gathering logs for CRI-O ...
	I0914 22:52:23.943820   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 22:52:24.538337   45407 logs.go:123] Gathering logs for storage-provisioner [a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669] ...
	I0914 22:52:24.538390   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a554481de89e79b48e280798e3cc4b670e468ef87e6b48f073810d41dad60669"
	I0914 22:52:24.585031   45407 logs.go:123] Gathering logs for kubelet ...
	I0914 22:52:24.585072   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 22:52:24.639086   45407 logs.go:123] Gathering logs for dmesg ...
	I0914 22:52:24.639120   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 22:52:24.650905   45407 logs.go:123] Gathering logs for kube-apiserver [33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043] ...
	I0914 22:52:24.650925   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33222eae96b0a2e036f6f6807cdfe5ab1993b6a059d5f51f2fcb8e032de9e043"
	I0914 22:52:24.698547   45407 logs.go:123] Gathering logs for etcd [db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38] ...
	I0914 22:52:24.698590   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7177e981567386c6bf4f8292a92bd556ad84f9faf2dee61c7778e46ca0fd38"
	I0914 22:52:24.745590   45407 logs.go:123] Gathering logs for coredns [8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a] ...
	I0914 22:52:24.745619   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a06ddba66f0aec890f25d9273facf2fd4258d6713cb3be45f9327850e5b070a"
	I0914 22:52:24.777667   45407 logs.go:123] Gathering logs for kube-proxy [eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1] ...
	I0914 22:52:24.777697   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb1a03278a771f0331b467a4dd91118cc8d2c0c4f8fe24da9387d6f1036dc0d1"
	I0914 22:52:24.811536   45407 logs.go:123] Gathering logs for storage-provisioner [0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf] ...
	I0914 22:52:24.811565   45407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d6da8266a65b860520bedd961dda2b4eb8d90c97bb29767b0d6128869e64fbf"
	I0914 22:52:25.132299   46713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (12.552094274s)
	I0914 22:52:25.132371   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:25.146754   46713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:52:25.155324   46713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:52:25.164387   46713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:52:25.164429   46713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0914 22:52:25.227970   46713 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0914 22:52:25.228029   46713 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:52:25.376482   46713 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:52:25.376603   46713 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:52:25.376721   46713 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:52:25.536163   46713 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:52:25.536339   46713 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:52:25.543555   46713 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0914 22:52:25.663579   46713 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:52:25.665315   46713 out.go:204]   - Generating certificates and keys ...
	I0914 22:52:25.665428   46713 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:52:25.665514   46713 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:52:25.665610   46713 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 22:52:25.665688   46713 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 22:52:25.665777   46713 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 22:52:25.665844   46713 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 22:52:25.665925   46713 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 22:52:25.666002   46713 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 22:52:25.666095   46713 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 22:52:25.666223   46713 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 22:52:25.666277   46713 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 22:52:25.666352   46713 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:52:25.931689   46713 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:52:26.088693   46713 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:52:26.251867   46713 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:52:26.566157   46713 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:52:26.567520   46713 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:52:27.360740   45407 system_pods.go:59] 8 kube-system pods found
	I0914 22:52:27.360780   45407 system_pods.go:61] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.360788   45407 system_pods.go:61] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.360795   45407 system_pods.go:61] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.360802   45407 system_pods.go:61] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.360809   45407 system_pods.go:61] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.360816   45407 system_pods.go:61] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.360827   45407 system_pods.go:61] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.360841   45407 system_pods.go:61] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.360848   45407 system_pods.go:74] duration metric: took 3.959563404s to wait for pod list to return data ...
	I0914 22:52:27.360859   45407 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:52:27.363690   45407 default_sa.go:45] found service account: "default"
	I0914 22:52:27.363715   45407 default_sa.go:55] duration metric: took 2.849311ms for default service account to be created ...
	I0914 22:52:27.363724   45407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:52:27.372219   45407 system_pods.go:86] 8 kube-system pods found
	I0914 22:52:27.372520   45407 system_pods.go:89] "coredns-5dd5756b68-rntdg" [26064ba4-be5d-45b8-bc54-9af74efb4b1c] Running
	I0914 22:52:27.372552   45407 system_pods.go:89] "etcd-no-preload-344363" [ff80f602-408b-405c-9c35-d780008174ae] Running
	I0914 22:52:27.372571   45407 system_pods.go:89] "kube-apiserver-no-preload-344363" [45d51faa-e79f-4101-9c21-e1416d99d239] Running
	I0914 22:52:27.372590   45407 system_pods.go:89] "kube-controller-manager-no-preload-344363" [f00e3123-e481-418f-b1da-695969132036] Running
	I0914 22:52:27.372602   45407 system_pods.go:89] "kube-proxy-zzkbp" [1d3cfe91-a904-4c1a-834d-261806db97c0] Running
	I0914 22:52:27.372616   45407 system_pods.go:89] "kube-scheduler-no-preload-344363" [ee4f440c-3e65-4623-b0ae-8ad55188ee67] Running
	I0914 22:52:27.372744   45407 system_pods.go:89] "metrics-server-57f55c9bc5-swnnf" [4b0db27e-c36f-452e-8ed5-57027bf9ab99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:52:27.372835   45407 system_pods.go:89] "storage-provisioner" [dafe9e6f-dd6b-4003-9728-d5b0aec14091] Running
	I0914 22:52:27.372845   45407 system_pods.go:126] duration metric: took 9.100505ms to wait for k8s-apps to be running ...
	I0914 22:52:27.372854   45407 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:52:27.373084   45407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:27.390112   45407 system_svc.go:56] duration metric: took 17.249761ms WaitForService to wait for kubelet.
	I0914 22:52:27.390137   45407 kubeadm.go:581] duration metric: took 4m23.585167656s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:52:27.390174   45407 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:52:27.393099   45407 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:52:27.393123   45407 node_conditions.go:123] node cpu capacity is 2
	I0914 22:52:27.393133   45407 node_conditions.go:105] duration metric: took 2.953927ms to run NodePressure ...
	I0914 22:52:27.393142   45407 start.go:228] waiting for startup goroutines ...
	I0914 22:52:27.393148   45407 start.go:233] waiting for cluster config update ...
	I0914 22:52:27.393156   45407 start.go:242] writing updated cluster config ...
	I0914 22:52:27.393379   45407 ssh_runner.go:195] Run: rm -f paused
	I0914 22:52:27.441228   45407 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:52:27.442889   45407 out.go:177] * Done! kubectl is now configured to use "no-preload-344363" cluster and "default" namespace by default
	I0914 22:52:26.569354   46713 out.go:204]   - Booting up control plane ...
	I0914 22:52:26.569484   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:52:26.582407   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:52:26.589858   46713 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:52:26.591607   46713 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:52:26.596764   46713 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:52:37.101083   46713 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503887 seconds
	I0914 22:52:37.101244   46713 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:52:37.116094   46713 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:52:37.633994   46713 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:52:37.634186   46713 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-930717 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 22:52:38.144071   46713 kubeadm.go:322] [bootstrap-token] Using token: jnf2g9.h0rslaob8wj902ym
	I0914 22:52:38.145543   46713 out.go:204]   - Configuring RBAC rules ...
	I0914 22:52:38.145661   46713 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:52:38.153514   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:52:38.159575   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:52:38.164167   46713 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:52:38.167903   46713 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:52:38.241317   46713 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:52:38.572283   46713 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:52:38.572309   46713 kubeadm.go:322] 
	I0914 22:52:38.572399   46713 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:52:38.572410   46713 kubeadm.go:322] 
	I0914 22:52:38.572526   46713 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:52:38.572547   46713 kubeadm.go:322] 
	I0914 22:52:38.572581   46713 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:52:38.572669   46713 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:52:38.572762   46713 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:52:38.572775   46713 kubeadm.go:322] 
	I0914 22:52:38.572836   46713 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:52:38.572926   46713 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:52:38.573012   46713 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:52:38.573020   46713 kubeadm.go:322] 
	I0914 22:52:38.573089   46713 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0914 22:52:38.573152   46713 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:52:38.573159   46713 kubeadm.go:322] 
	I0914 22:52:38.573222   46713 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573313   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 \
	I0914 22:52:38.573336   46713 kubeadm.go:322]     --control-plane 	  
	I0914 22:52:38.573343   46713 kubeadm.go:322] 
	I0914 22:52:38.573406   46713 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:52:38.573414   46713 kubeadm.go:322] 
	I0914 22:52:38.573527   46713 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jnf2g9.h0rslaob8wj902ym \
	I0914 22:52:38.573687   46713 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7ceaa7d5345331fb33f5811af789dfa951a88440750b9d5448c3fd1d19f82e27 
	I0914 22:52:38.574219   46713 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
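(Editor's note: the preflight warning above is advisory; minikube starts the kubelet unit itself, so it is generally benign here. The fix kubeadm suggests is a single command on the node:)

    sudo systemctl enable kubelet.service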
	I0914 22:52:38.574248   46713 cni.go:84] Creating CNI manager for ""
	I0914 22:52:38.574261   46713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 22:52:38.575900   46713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 22:52:38.577300   46713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 22:52:38.587120   46713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
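(Editor's note: the bridge CNI configuration written here can be inspected on the node; the exact 457-byte contents are not reproduced in this report. A minimal check, path taken from the log; the file is typically a plugin chain using the standard "bridge" plugin with host-local IPAM:)

    sudo cat /etc/cni/net.d/1-k8s.conflist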
	I0914 22:52:38.610197   46713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:52:38.610265   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.610267   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=old-k8s-version-930717 minikube.k8s.io/updated_at=2023_09_14T22_52_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.858082   46713 ops.go:34] apiserver oom_adj: -16
	I0914 22:52:38.858297   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:38.960045   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:39.549581   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.049788   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:40.549998   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.049043   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:41.549875   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.049596   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:42.549039   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.049563   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:43.549663   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.049534   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:44.549938   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.049227   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:45.549171   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.049628   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:46.550019   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.049857   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:47.549272   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.049648   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:48.549709   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.049770   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:49.550050   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.048948   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:50.549154   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.049695   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:51.549811   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.049813   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:52.549858   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.049505   46713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:52:53.149056   46713 kubeadm.go:1081] duration metric: took 14.538858246s to wait for elevateKubeSystemPrivileges.
	I0914 22:52:53.149093   46713 kubeadm.go:406] StartCluster complete in 5m40.2118148s
	I0914 22:52:53.149114   46713 settings.go:142] acquiring lock: {Name:mkfc5a6528df0a16ee386b9556edc7971a9e4692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.149200   46713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:52:53.150928   46713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-6287/kubeconfig: {Name:mk47d568971d904bb9487644d32abca18251aab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:53.151157   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:52:53.151287   46713 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:52:53.151382   46713 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151391   46713 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-930717"
	I0914 22:52:53.151405   46713 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-930717"
	I0914 22:52:53.151411   46713 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-930717"
	W0914 22:52:53.151413   46713 addons.go:240] addon storage-provisioner should already be in state true
	I0914 22:52:53.151419   46713 config.go:182] Loaded profile config "old-k8s-version-930717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:52:53.151423   46713 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-930717"
	W0914 22:52:53.151433   46713 addons.go:240] addon metrics-server should already be in state true
	I0914 22:52:53.151479   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151412   46713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-930717"
	I0914 22:52:53.151484   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.151796   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151820   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.151958   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.151873   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.152044   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.170764   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0914 22:52:53.170912   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0914 22:52:53.171012   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0914 22:52:53.171235   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171345   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171378   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.171850   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171870   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171970   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.171991   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.171999   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.172019   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.172232   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172517   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172572   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.172759   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.172910   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.172987   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.173110   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.173146   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.189453   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0914 22:52:53.189789   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.190229   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.190251   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.190646   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.190822   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.192990   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.195176   46713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 22:52:53.194738   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0914 22:52:53.196779   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:52:53.196797   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:52:53.196813   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.195752   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.197457   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.197476   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.197849   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.198026   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.200022   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.200176   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.201917   46713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:52:53.200654   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.200795   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.203540   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.203632   46713 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.203652   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.203671   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.203844   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.204002   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.206460   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.206968   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.206998   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.207153   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.207303   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.207524   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.207672   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.253944   46713 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-930717"
	W0914 22:52:53.253968   46713 addons.go:240] addon default-storageclass should already be in state true
	I0914 22:52:53.253990   46713 host.go:66] Checking if "old-k8s-version-930717" exists ...
	I0914 22:52:53.254330   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.254377   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0914 22:52:53.270047   46713 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-930717" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0914 22:52:53.270077   46713 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0914 22:52:53.270099   46713 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:53.271730   46713 out.go:177] * Verifying Kubernetes components...
	I0914 22:52:53.270422   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0914 22:52:53.273255   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:52:53.273653   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.274180   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.274206   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.274559   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.275121   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:52:53.275165   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:52:53.291000   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0914 22:52:53.291405   46713 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:52:53.291906   46713 main.go:141] libmachine: Using API Version  1
	I0914 22:52:53.291927   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:52:53.292312   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:52:53.292529   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetState
	I0914 22:52:53.294366   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .DriverName
	I0914 22:52:53.294583   46713 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.294598   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:52:53.294611   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHHostname
	I0914 22:52:53.297265   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297771   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:a5:28", ip: ""} in network mk-old-k8s-version-930717: {Iface:virbr4 ExpiryTime:2023-09-14 23:46:54 +0000 UTC Type:0 Mac:52:54:00:12:a5:28 Iaid: IPaddr:192.168.72.70 Prefix:24 Hostname:old-k8s-version-930717 Clientid:01:52:54:00:12:a5:28}
	I0914 22:52:53.297809   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | domain old-k8s-version-930717 has defined IP address 192.168.72.70 and MAC address 52:54:00:12:a5:28 in network mk-old-k8s-version-930717
	I0914 22:52:53.297895   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHPort
	I0914 22:52:53.298057   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHKeyPath
	I0914 22:52:53.298236   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .GetSSHUsername
	I0914 22:52:53.298383   46713 sshutil.go:53] new ssh client: &{IP:192.168.72.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/old-k8s-version-930717/id_rsa Username:docker}
	I0914 22:52:53.344235   46713 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.344478   46713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:52:53.350176   46713 node_ready.go:49] node "old-k8s-version-930717" has status "Ready":"True"
	I0914 22:52:53.350196   46713 node_ready.go:38] duration metric: took 5.934445ms waiting for node "old-k8s-version-930717" to be "Ready" ...
	I0914 22:52:53.350204   46713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:52:53.359263   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:52:53.359296   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 22:52:53.367792   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:52:53.384576   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:52:53.397687   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:52:53.397703   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:52:53.439813   46713 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:53.439843   46713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:52:53.473431   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:52:53.499877   46713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:52:54.233171   46713 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 22:52:54.365130   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365156   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365178   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365198   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365438   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365465   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365476   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365481   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.365486   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.365546   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.365556   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.365565   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.365574   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367064   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367090   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367068   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367489   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367513   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367526   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.367540   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.367489   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.367757   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.367810   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.367852   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.830646   46713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330728839s)
	I0914 22:52:54.830698   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.830711   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831036   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831059   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831065   46713 main.go:141] libmachine: (old-k8s-version-930717) DBG | Closing plugin on server side
	I0914 22:52:54.831080   46713 main.go:141] libmachine: Making call to close driver server
	I0914 22:52:54.831096   46713 main.go:141] libmachine: (old-k8s-version-930717) Calling .Close
	I0914 22:52:54.831312   46713 main.go:141] libmachine: Successfully made call to close driver server
	I0914 22:52:54.831328   46713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 22:52:54.831338   46713 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-930717"
	I0914 22:52:54.832992   46713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 22:52:54.834828   46713 addons.go:502] enable addons completed in 1.683549699s: enabled=[storage-provisioner default-storageclass metrics-server]
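
The lines above show minikube scp-ing the storage-provisioner and metrics-server manifests onto the node and then applying them with the bundled kubectl (`kubectl apply -f /etc/kubernetes/addons/...`). As a rough, hypothetical sketch of that apply step — run locally with a plain kubectl rather than over SSH as minikube actually does, and with the file name of the sketch invented — it amounts to:

    // addon_apply_sketch.go - hypothetical sketch of applying addon manifests
    // the way the log above does (kubectl apply -f <files>); run locally, not
    // over SSH, so it only needs a kubectl binary on PATH.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func applyAddons(kubeconfig string, manifests []string) error {
    	// Build: kubectl --kubeconfig=<path> apply -f m1 -f m2 ...
    	args := []string{"--kubeconfig=" + kubeconfig, "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Paths mirror the ones in the log; adjust for a real cluster.
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	if err := applyAddons("/var/lib/minikube/kubeconfig", manifests); err != nil {
    		fmt.Println(err)
    	}
    }
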
	I0914 22:52:55.415046   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:57.878279   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:52:59.879299   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:01.879559   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:03.880088   46713 pod_ready.go:102] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:05.880334   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.880355   46713 pod_ready.go:81] duration metric: took 12.512536425s waiting for pod "coredns-5644d7b6d9-5dhgr" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.880364   46713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885370   46713 pod_ready.go:92] pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:05.885386   46713 pod_ready.go:81] duration metric: took 5.016722ms waiting for pod "coredns-5644d7b6d9-zh279" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:05.885394   46713 pod_ready.go:38] duration metric: took 12.535181673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:53:05.885413   46713 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:53:05.885466   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:53:05.901504   46713 api_server.go:72] duration metric: took 12.631380008s to wait for apiserver process to appear ...
	I0914 22:53:05.901522   46713 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:53:05.901534   46713 api_server.go:253] Checking apiserver healthz at https://192.168.72.70:8443/healthz ...
	I0914 22:53:05.907706   46713 api_server.go:279] https://192.168.72.70:8443/healthz returned 200:
	ok
	I0914 22:53:05.908445   46713 api_server.go:141] control plane version: v1.16.0
	I0914 22:53:05.908466   46713 api_server.go:131] duration metric: took 6.937898ms to wait for apiserver health ...
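
Before checking system pods, the log waits for https://192.168.72.70:8443/healthz to return 200 with the body "ok". A minimal Go probe of that kind could look like the following; this is an illustrative sketch, not minikube's implementation, and it skips TLS verification where the real code trusts the cluster CA:

    // healthz_probe_sketch.go - minimal sketch of the apiserver healthz poll
    // seen above; InsecureSkipVerify is an assumption made for brevity.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // apiserver reports healthy, as at 22:53:05 above
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.72.70:8443/healthz", time.Minute))
    }
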
	I0914 22:53:05.908475   46713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:53:05.911983   46713 system_pods.go:59] 5 kube-system pods found
	I0914 22:53:05.912001   46713 system_pods.go:61] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.912008   46713 system_pods.go:61] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.912013   46713 system_pods.go:61] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.912022   46713 system_pods.go:61] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.912033   46713 system_pods.go:61] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.912043   46713 system_pods.go:74] duration metric: took 3.562804ms to wait for pod list to return data ...
	I0914 22:53:05.912054   46713 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:53:05.914248   46713 default_sa.go:45] found service account: "default"
	I0914 22:53:05.914267   46713 default_sa.go:55] duration metric: took 2.203622ms for default service account to be created ...
	I0914 22:53:05.914276   46713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:53:05.917292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:05.917310   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:05.917315   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:05.917319   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:05.917325   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:05.917331   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:05.917343   46713 retry.go:31] will retry after 277.910308ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.201147   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.201170   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.201175   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.201179   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.201185   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.201191   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.201205   46713 retry.go:31] will retry after 262.96693ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.470372   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.470410   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.470418   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.470425   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.470435   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.470446   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.470481   46713 retry.go:31] will retry after 486.428451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:06.961666   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:06.961693   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:06.961700   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:06.961706   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:06.961716   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:06.961724   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:06.961740   46713 retry.go:31] will retry after 524.467148ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:07.491292   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:07.491315   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:07.491321   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:07.491325   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:07.491331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:07.491337   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:07.491370   46713 retry.go:31] will retry after 567.308028ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.063587   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.063612   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.063618   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.063622   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.063629   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.063635   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.063649   46713 retry.go:31] will retry after 723.150919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:08.791530   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:08.791561   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:08.791571   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:08.791578   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:08.791588   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:08.791597   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:08.791616   46713 retry.go:31] will retry after 1.173741151s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:09.971866   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:09.971895   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:09.971903   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:09.971909   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:09.971919   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:09.971928   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:09.971946   46713 retry.go:31] will retry after 1.046713916s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:11.024191   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:11.024220   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:11.024226   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:11.024231   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:11.024238   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:11.024244   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:11.024260   46713 retry.go:31] will retry after 1.531910243s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:12.562517   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:12.562555   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:12.562564   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:12.562573   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:12.562584   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:12.562594   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:12.562612   46713 retry.go:31] will retry after 2.000243773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:14.570247   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:14.570284   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:14.570294   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:14.570303   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:14.570320   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:14.570329   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:14.570346   46713 retry.go:31] will retry after 2.095330784s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:16.670345   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:16.670372   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:16.670377   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:16.670382   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:16.670394   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:16.670401   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:16.670416   46713 retry.go:31] will retry after 2.811644755s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:19.488311   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:19.488339   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:19.488344   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:19.488348   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:19.488354   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:19.488362   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:19.488380   46713 retry.go:31] will retry after 3.274452692s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:22.768417   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:22.768446   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:22.768454   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:22.768461   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:22.768471   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:22.768481   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:22.768499   46713 retry.go:31] will retry after 5.52037196s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:28.294932   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:28.294958   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:28.294964   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:28.294967   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:28.294975   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:28.294980   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:28.294994   46713 retry.go:31] will retry after 4.305647383s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:32.605867   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:32.605894   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:32.605900   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:32.605903   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:32.605910   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:32.605915   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:32.605929   46713 retry.go:31] will retry after 8.214918081s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:40.825284   46713 system_pods.go:86] 5 kube-system pods found
	I0914 22:53:40.825314   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:40.825319   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:40.825324   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:40.825331   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:40.825336   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:40.825352   46713 retry.go:31] will retry after 10.5220598s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:53:51.353809   46713 system_pods.go:86] 7 kube-system pods found
	I0914 22:53:51.353844   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:53:51.353851   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:53:51.353856   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Pending
	I0914 22:53:51.353862   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:53:51.353868   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Pending
	I0914 22:53:51.353878   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:53:51.353887   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:53:51.353907   46713 retry.go:31] will retry after 10.482387504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0914 22:54:01.842876   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:01.842900   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:01.842905   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:01.842909   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Pending
	I0914 22:54:01.842914   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:01.842918   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Pending
	I0914 22:54:01.842921   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:01.842925   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:01.842931   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:01.842937   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:01.842950   46713 retry.go:31] will retry after 14.535469931s: missing components: etcd, kube-controller-manager
	I0914 22:54:16.384703   46713 system_pods.go:86] 9 kube-system pods found
	I0914 22:54:16.384732   46713 system_pods.go:89] "coredns-5644d7b6d9-5dhgr" [009c9ce3-6e97-44a7-89f5-7a4566be5b1b] Running
	I0914 22:54:16.384738   46713 system_pods.go:89] "coredns-5644d7b6d9-zh279" [06e39db3-fd3a-4919-aa49-4aa8b21f59b5] Running
	I0914 22:54:16.384742   46713 system_pods.go:89] "etcd-old-k8s-version-930717" [54bc1941-682e-4a7b-88d0-434f3436afd0] Running
	I0914 22:54:16.384747   46713 system_pods.go:89] "kube-apiserver-old-k8s-version-930717" [0a1b949c-46c9-42da-85b8-8a42aace12ae] Running
	I0914 22:54:16.384751   46713 system_pods.go:89] "kube-controller-manager-old-k8s-version-930717" [2662214d-e986-4274-bf50-6f3c156da63b] Running
	I0914 22:54:16.384754   46713 system_pods.go:89] "kube-proxy-78njr" [0704238a-5fb8-46d4-912c-4bbf7f419a12] Running
	I0914 22:54:16.384758   46713 system_pods.go:89] "kube-scheduler-old-k8s-version-930717" [195d9923-1089-4bfb-8729-6ad7e066af97] Running
	I0914 22:54:16.384766   46713 system_pods.go:89] "metrics-server-74d5856cc6-qjxtc" [995d5d99-10f4-4928-b384-79e5b03b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 22:54:16.384773   46713 system_pods.go:89] "storage-provisioner" [960b6941-9167-4b87-b0f8-4fd4ad1227aa] Running
	I0914 22:54:16.384782   46713 system_pods.go:126] duration metric: took 1m10.470499333s to wait for k8s-apps to be running ...
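
The retry lines above poll the kube-system pod list with a growing delay until etcd, kube-apiserver, kube-controller-manager and kube-scheduler all report Running. A simplified sketch of that wait loop follows; the cluster state is stubbed with a map (the real code lists pods via the apiserver on every pass), and the backoff factor and attempt limit are assumptions:

    // system_pods_retry_sketch.go - rough sketch of the "will retry after ...:
    // missing components" loop above; pod state is stubbed, not queried.
    package main

    import (
    	"fmt"
    	"time"
    )

    var required = []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-proxy", "kube-scheduler", "coredns"}

    // missingComponents returns the required components with no Running pod.
    func missingComponents(running map[string]bool) []string {
    	var missing []string
    	for _, c := range required {
    		if !running[c] {
    			missing = append(missing, c)
    		}
    	}
    	return missing
    }

    func main() {
    	// Stubbed "cluster state"; a real implementation would list pods each pass.
    	running := map[string]bool{"coredns": true, "kube-proxy": true}
    	delay := 300 * time.Millisecond
    	for attempt := 1; attempt <= 20; attempt++ {
    		missing := missingComponents(running)
    		if len(missing) == 0 {
    			fmt.Println("all system-critical components running")
    			return
    		}
    		fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
    		time.Sleep(delay)
    		if delay < 10*time.Second {
    			delay = delay * 3 / 2 // grow the backoff, roughly as the log does
    		}
    		if attempt == 5 { // simulate the static pods eventually coming up
    			running["etcd"], running["kube-apiserver"], running["kube-controller-manager"], running["kube-scheduler"] = true, true, true, true
    		}
    	}
    	fmt.Println("timed out waiting for system pods")
    }
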
	I0914 22:54:16.384791   46713 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:54:16.384849   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:16.409329   46713 system_svc.go:56] duration metric: took 24.530447ms WaitForService to wait for kubelet.
	I0914 22:54:16.409359   46713 kubeadm.go:581] duration metric: took 1m23.139238057s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:54:16.409385   46713 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:54:16.412461   46713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 22:54:16.412490   46713 node_conditions.go:123] node cpu capacity is 2
	I0914 22:54:16.412505   46713 node_conditions.go:105] duration metric: took 3.107771ms to run NodePressure ...
	I0914 22:54:16.412519   46713 start.go:228] waiting for startup goroutines ...
	I0914 22:54:16.412529   46713 start.go:233] waiting for cluster config update ...
	I0914 22:54:16.412546   46713 start.go:242] writing updated cluster config ...
	I0914 22:54:16.412870   46713 ssh_runner.go:195] Run: rm -f paused
	I0914 22:54:16.460181   46713 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0914 22:54:16.461844   46713 out.go:177] 
	W0914 22:54:16.463221   46713 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0914 22:54:16.464486   46713 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0914 22:54:16.465912   46713 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-930717" cluster and "default" namespace by default
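
The warning a few lines up comes from comparing kubectl's minor version with the cluster's (1.28 against 1.16, a skew of 12 minors). A tiny sketch of that comparison, with the version parsing deliberately simplified and not taken from minikube's code, might be:

    // version_skew_sketch.go - sketch of the minor-version skew check behind
    // the "minor skew: 12" warning above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component from a "major.minor.patch" version string.
    func minor(v string) (int, error) {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return 0, fmt.Errorf("unexpected version %q", v)
    	}
    	return strconv.Atoi(parts[1])
    }

    func main() {
    	kubectlMinor, _ := minor("1.28.2")
    	clusterMinor, _ := minor("1.16.0")
    	skew := kubectlMinor - clusterMinor
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew) // prints 12, matching the log
    	if skew > 1 {
    		fmt.Println("! kubectl may have incompatibilities with this cluster version")
    	}
    }
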
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-09-14 22:46:53 UTC, ends at Thu 2023-09-14 23:06:56 UTC. --
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.941141177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8ea5023-6b10-42b7-a60e-07e313793cc5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.941262910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8ea5023-6b10-42b7-a60e-07e313793cc5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.941546285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8ea5023-6b10-42b7-a60e-07e313793cc5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.942185023Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=071142bb-85a1-4635-87d9-b9eb98110127 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.942303204Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694731976047801787,StartedAt:1694731976091108443,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/960b6941-9167-4b87-b0f8-4fd4ad1227aa/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/960b6941-9167-4b87-b0f8-4fd4ad1227aa/containers/storage-provisioner/53ef96c1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/960b6941-9167-4b87-b0f8-4fd4ad1227aa/volumes/kubernetes.io~secret/storage-provisioner-token-nh5gj,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_960b6941-9167-4b87-b0f8-4fd4ad1227aa/storage-pr
ovisioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=071142bb-85a1-4635-87d9-b9eb98110127 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.943057339Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=89cca8d1-3764-4f86-b585-6aba67fa2f5c name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.943158430Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694731975751267176,StartedAt:1694731975791293324,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-proxy:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0704238a-5fb8-46d4-912c-4bbf7f419a12/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0704238a-5fb8-46d4-912c-4bbf7f419a12/containers/kube-proxy/a879b6f8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/0704238a-5fb8-46d4-912c-4bbf7f419a12/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/servi
ceaccount,HostPath:/var/lib/kubelet/pods/0704238a-5fb8-46d4-912c-4bbf7f419a12/volumes/kubernetes.io~secret/kube-proxy-token-24lsg,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-proxy-78njr_0704238a-5fb8-46d4-912c-4bbf7f419a12/kube-proxy/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=89cca8d1-3764-4f86-b585-6aba67fa2f5c name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.943710442Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=d7c5b506-94a0-4497-b7ee-bbd626631149 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.943816979Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694731975419728583,StartedAt:1694731975468659450,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/coredns:1.6.2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/009c9ce3-6e97-44a7-89f5-7a4566be5b1b/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/009c9ce3-6e97-44a7-89f5-7a4566be5b1b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/009c9ce3-6e97-44a7-89f5-7a4566be5b1b/containers/coredns/94716cb7,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/009c9ce
3-6e97-44a7-89f5-7a4566be5b1b/volumes/kubernetes.io~secret/coredns-token-xg4qq,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-5644d7b6d9-5dhgr_009c9ce3-6e97-44a7-89f5-7a4566be5b1b/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=d7c5b506-94a0-4497-b7ee-bbd626631149 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.944283030Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=1b7e7ba4-8561-484f-aa18-501e10a2e754 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.944353203Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694731975306255610,StartedAt:1694731975385258751,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/coredns:1.6.2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/06e39db3-fd3a-4919-aa49-4aa8b21f59b5/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/06e39db3-fd3a-4919-aa49-4aa8b21f59b5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/06e39db3-fd3a-4919-aa49-4aa8b21f59b5/containers/coredns/4b14d2b0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/06e39db
3-fd3a-4919-aa49-4aa8b21f59b5/volumes/kubernetes.io~secret/coredns-token-xg4qq,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-5644d7b6d9-zh279_06e39db3-fd3a-4919-aa49-4aa8b21f59b5/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=1b7e7ba4-8561-484f-aa18-501e10a2e754 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.945095886Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=a2cac99a-456d-470f-8abd-f44d84a56d64 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.945167007Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694731949261407957,StartedAt:1694731949314741128,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/etcd:3.3.15-0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/381b3b581ff73227b3cba8e1c96bc6c0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/381b3b581ff73227b3cba8e1c96bc6c0/containers/etcd/e38ecf72,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-old-k8s-version-930717_381b3b581ff73227b3cba8e1c96bc6c0/etcd/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=a2cac99a-456d-470f-8abd-f44d84a56d64 name=/runtime.v1alpha2.RuntimeService/ContainerSt
atus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.945733697Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=5434f184-0c05-410d-a16a-0336aacb2c59 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.945818188Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694731948015261337,StartedAt:1694731948057995786,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-scheduler:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b3d303074fe0ca1d42a8bd9ed248df09/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b3d303074fe0ca1d42a8bd9ed248df09/containers/kube-scheduler/d5403db2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-old-k8s-version-930717_b3d303074fe0ca1d42a8bd9ed248df09/kube-scheduler/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=5434f184-0c05-410d-a16a-0336aacb2c59 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.946269279Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=102ff6e3-4741-48d9-b309-e44df7c001dc name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.946355047Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694731947643587597,StartedAt:1694731947689814806,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-controller-manager:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7376ddb4f190a0ded9394063437bcb4e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7376ddb4f190a0ded9394063437bcb4e/containers/kube-controller-manager/b229dd53,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-930717_7376ddb4f190a0ded9394063437bcb4e/kube-controller-manager/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=102ff6e3-4741-48d9-b309-e44df7c001dc name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.947088512Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=e6dd66ba-762c-4869-8c08-ddc0f11e47a5 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.947739452Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694731947497708364,StartedAt:1694731947551924731,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-apiserver:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ce1dcffe2ddeeabea9e697b171701efa/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ce1dcffe2ddeeabea9e697b171701efa/containers/kube-apiserver/31bfddf8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-old-k8s-version-930717_ce1dcff
e2ddeeabea9e697b171701efa/kube-apiserver/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e6dd66ba-762c-4869-8c08-ddc0f11e47a5 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.949405786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c5f9e04d-4055-4221-a88a-da6d486bb445 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.949449280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c5f9e04d-4055-4221-a88a-da6d486bb445 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.949742103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c5f9e04d-4055-4221-a88a-da6d486bb445 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.976041167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e3cf3ed4-3368-4cdc-bd45-d8d6baa25adb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.976129563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e3cf3ed4-3368-4cdc-bd45-d8d6baa25adb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 14 23:06:56 old-k8s-version-930717 crio[713]: time="2023-09-14 23:06:56.976329562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0,PodSandboxId:c51a2bdff31e0f17aa7b428ddd73db02d7105abd1444f3764b2325137798d466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694731975949530310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960b6941-9167-4b87-b0f8-4fd4ad1227aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8beea06e,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237,PodSandboxId:9ab70cb9a88a03e4f06ade31d1fdbbeb3acd5fd1dfbbf4210d7f2337538b610b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694731975651132524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-78njr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0704238a-5fb8-46d4-912c-4bbf7f419a12,},Annotations:map[string]string{io.kubernetes.container.hash: 389bb6db,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e,PodSandboxId:b96fab0054704b364f4616008806396279c466085cd7ccfe39a2e97e53a3e661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975311036394,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5dhgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 009c9ce3-6e97-44a7-89f5-7a4566be5b1b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811,PodSandboxId:f213c4a0a6e67ec16d11e63d6f7cc0b7df78560e550db790732008c076060131,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694731975186696250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zh279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e39db3-fd3a-4919-aa49-4aa8b21f59b5,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 6f13e958,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974,PodSandboxId:90de897d887d779dcb58a15ef8c81f9e220f945ccecd94160bfccaef7fe63034,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694731949180367109,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 381b3b581ff73227b3cba8e1c96bc6c0,},Annotations:map[string]string{io.kubernetes.container.hash: a0b393aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c,PodSandboxId:0e249a91091e377f4276bc3f0e1b8e80e44eb22754f439dc1e8f91e13a3ca86b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694731947949957130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.k
ubernetes.pod.name: kube-scheduler-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec,PodSandboxId:54731262bac6ebd0672e15533c1adce8930db39d18f149a49ec3555330187a6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694731947529690463,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290,PodSandboxId:4ae89dfeddff520ce26fce9b3f1f65100ed9835a4bd3dac2700d3e0e63c54d10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694731947409560391,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-930717,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1dcffe2ddeeabea9e697b171701efa,},Annotations:map[string]string{io.kubernetes.container.hash: 747e8edc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e3cf3ed4-3368-4cdc-bd45-d8d6baa25adb name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	505f9a835ea06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   c51a2bdff31e0
	9d3dabddbe65e       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   14 minutes ago      Running             kube-proxy                0                   9ab70cb9a88a0
	89d3f1675bb2d       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   14 minutes ago      Running             coredns                   0                   b96fab0054704
	3666121471cff       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   14 minutes ago      Running             coredns                   0                   f213c4a0a6e67
	ea4aa381d0367       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   90de897d887d7
	f3780dded8c30       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   0e249a91091e3
	220096e104c5c       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   54731262bac6e
	8df143c7256d3       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            0                   4ae89dfeddff5
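
	The listing above is CRI-O's view of the node's containers at the moment the report was captured. A roughly equivalent listing can usually be pulled interactively from the minikube VM with crictl; this is a sketch for reference only, not part of the captured output, and assumes the profile name used by this test:

	  minikube -p old-k8s-version-930717 ssh "sudo crictl ps -a"
	  # logs for a specific container, e.g. the storage-provisioner ID shown above
	  minikube -p old-k8s-version-930717 ssh "sudo crictl logs 505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0"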
	
	* 
	* ==> coredns [3666121471cff39c83fbb56500e9e18ea9f3dc20e630da103db6645093281811] <==
	* .:53
	2023-09-14T22:52:55.563Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-09-14T22:52:55.563Z [INFO] CoreDNS-1.6.2
	2023-09-14T22:52:55.563Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-14T22:52:55.571Z [INFO] 127.0.0.1:40315 - 13800 "HINFO IN 1364800437933321559.4221059419903132037. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007273829s
	
	* 
	* ==> coredns [89d3f1675bb2d4c6cb9de4d0228c74e04342e04d4a98bb8df36a2de5bba0c01e] <==
	* .:53
	2023-09-14T22:52:55.602Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-09-14T22:52:55.602Z [INFO] CoreDNS-1.6.2
	2023-09-14T22:52:55.602Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-14T22:52:55.617Z [INFO] 127.0.0.1:56844 - 3262 "HINFO IN 9187367119679096330.7872013698849296893. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014555503s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-930717
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-930717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=old-k8s-version-930717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_52_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:52:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 23:06:34 +0000   Thu, 14 Sep 2023 22:52:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 23:06:34 +0000   Thu, 14 Sep 2023 22:52:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 23:06:34 +0000   Thu, 14 Sep 2023 22:52:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 23:06:34 +0000   Thu, 14 Sep 2023 22:52:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.70
	  Hostname:    old-k8s-version-930717
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 820a4887f0bd47b9a114e5e546ca5e2b
	 System UUID:                820a4887-f0bd-47b9-a114-e5e546ca5e2b
	 Boot ID:                    4e318042-261b-4123-9603-549b1ecafd50
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-5dhgr                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                coredns-5644d7b6d9-zh279                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                etcd-old-k8s-version-930717                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-930717             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-controller-manager-old-k8s-version-930717    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-proxy-78njr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                kube-scheduler-old-k8s-version-930717             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                metrics-server-74d5856cc6-qjxtc                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             340Mi (16%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-930717     Node old-k8s-version-930717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet, old-k8s-version-930717     Node old-k8s-version-930717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet, old-k8s-version-930717     Node old-k8s-version-930717 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kube-proxy, old-k8s-version-930717  Starting kube-proxy.
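
	The node description above is the standard kubectl view of the cluster node (roles, conditions, capacity, and per-pod resource requests). Assuming the kubeconfig context that minikube creates for this profile, the same data could be pulled with:

	  kubectl --context old-k8s-version-930717 describe node old-k8s-version-930717
	  kubectl --context old-k8s-version-930717 get pods -n kube-system -o wide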
	
	* 
	* ==> dmesg <==
	* [Sep14 22:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.087007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.431604] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.750359] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133117] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.363433] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep14 22:47] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.132272] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.155114] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.127414] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.253365] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +20.232780] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +0.435946] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.414608] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.924168] kauditd_printk_skb: 2 callbacks suppressed
	[Sep14 22:52] systemd-fstab-generator[3085]: Ignoring "noauto" for root device
	[  +0.751000] kauditd_printk_skb: 6 callbacks suppressed
	[Sep14 22:53] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [ea4aa381d03673e11c44e31dab2d46afb16d65eff5e06a29fca893443ea4a974] <==
	* 2023-09-14 22:52:29.340729 I | raft: newRaft 3268eeb9c599aeb4 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-09-14 22:52:29.340744 I | raft: 3268eeb9c599aeb4 became follower at term 1
	2023-09-14 22:52:29.348380 W | auth: simple token is not cryptographically signed
	2023-09-14 22:52:29.352705 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-14 22:52:29.353825 I | etcdserver: 3268eeb9c599aeb4 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-14 22:52:29.354527 I | etcdserver/membership: added member 3268eeb9c599aeb4 [https://192.168.72.70:2380] to cluster 96a33227e2b23009
	2023-09-14 22:52:29.355127 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-14 22:52:29.355251 I | embed: listening for metrics on http://192.168.72.70:2381
	2023-09-14 22:52:29.355330 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-14 22:52:30.341261 I | raft: 3268eeb9c599aeb4 is starting a new election at term 1
	2023-09-14 22:52:30.341374 I | raft: 3268eeb9c599aeb4 became candidate at term 2
	2023-09-14 22:52:30.341392 I | raft: 3268eeb9c599aeb4 received MsgVoteResp from 3268eeb9c599aeb4 at term 2
	2023-09-14 22:52:30.341401 I | raft: 3268eeb9c599aeb4 became leader at term 2
	2023-09-14 22:52:30.341406 I | raft: raft.node: 3268eeb9c599aeb4 elected leader 3268eeb9c599aeb4 at term 2
	2023-09-14 22:52:30.341681 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-14 22:52:30.343264 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-14 22:52:30.343673 I | etcdserver: published {Name:old-k8s-version-930717 ClientURLs:[https://192.168.72.70:2379]} to cluster 96a33227e2b23009
	2023-09-14 22:52:30.343740 I | embed: ready to serve client requests
	2023-09-14 22:52:30.343939 I | embed: ready to serve client requests
	2023-09-14 22:52:30.345112 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-14 22:52:30.346347 I | embed: serving client requests on 192.168.72.70:2379
	2023-09-14 22:52:30.346453 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-14 22:52:54.707905 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-zh279\" " with result "range_response_count:1 size:1367" took too long (133.321798ms) to execute
	2023-09-14 23:02:30.370007 I | mvcc: store.index: compact 667
	2023-09-14 23:02:30.371870 I | mvcc: finished scheduled compaction at 667 (took 1.330432ms)
	
	* 
	* ==> kernel <==
	*  23:06:57 up 20 min,  0 users,  load average: 0.03, 0.09, 0.14
	Linux old-k8s-version-930717 5.10.57 #1 SMP Wed Sep 13 22:05:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8df143c7256d33b4304ae34bfce023ff0b238fa4d62ea62cbaf7f7318b8d7290] <==
	* I0914 22:58:34.514411       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 22:58:34.514591       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 22:58:34.514651       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:58:34.514659       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:00:34.515123       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 23:00:34.515251       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 23:00:34.515330       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:00:34.515341       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:02:34.516670       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 23:02:34.516802       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 23:02:34.516881       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:02:34.516891       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:03:34.517191       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 23:03:34.517544       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 23:03:34.517638       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:03:34.517668       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 23:05:34.518153       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0914 23:05:34.518285       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 23:05:34.518356       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 23:05:34.518364       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
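
	The repeated 503 responses above indicate that the aggregated v1beta1.metrics.k8s.io API, served by the metrics-server pod listed in the node description, stayed unreachable for the whole window. A sketch of how that could be confirmed against the same context (the pod name is taken from this report):

	  kubectl --context old-k8s-version-930717 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context old-k8s-version-930717 -n kube-system describe pod metrics-server-74d5856cc6-qjxtc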
	
	* 
	* ==> kube-controller-manager [220096e104c5cf3b6d81f4fe144082d3ef7b78c9645c1131d56ecb006d2af0ec] <==
	* E0914 23:00:27.135105       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:00:53.481769       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:00:57.386754       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:01:25.484407       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:01:27.638695       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:01:57.486795       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:01:57.891070       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0914 23:02:28.142940       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:02:29.489196       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:02:58.394767       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:03:01.490877       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:03:28.646871       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:03:33.492561       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:03:58.899341       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:04:05.494963       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:04:29.151275       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:04:37.496714       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:04:59.403643       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:05:09.499007       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:05:29.655773       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:05:41.501313       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:05:59.908694       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:06:13.503417       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 23:06:30.160803       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0914 23:06:45.505845       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [9d3dabddbe65e0f475739d69f3d6d4d2dcb33f40ab49a8d6a95360fdb180b237] <==
	* W0914 22:52:55.944294       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0914 22:52:55.961939       1 node.go:135] Successfully retrieved node IP: 192.168.72.70
	I0914 22:52:55.962054       1 server_others.go:149] Using iptables Proxier.
	I0914 22:52:55.966347       1 server.go:529] Version: v1.16.0
	I0914 22:52:55.970906       1 config.go:131] Starting endpoints config controller
	I0914 22:52:55.973917       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0914 22:52:55.976561       1 config.go:313] Starting service config controller
	I0914 22:52:55.976844       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0914 22:52:56.074283       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0914 22:52:56.077936       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f3780dded8c30f4a018c7ecbca812f449e03b7796539700da11f98a500e4230c] <==
	* I0914 22:52:33.539241       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0914 22:52:33.599607       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:52:33.599972       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:52:33.600184       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:52:33.600422       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:52:33.602591       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:33.602704       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:52:33.602786       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:52:33.602840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:33.604853       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:52:33.605163       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:52:33.606967       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 22:52:34.600891       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:52:34.604276       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:52:34.607721       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:52:34.608990       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:52:34.609904       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:34.611693       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:52:34.612793       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:52:34.615291       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:52:34.615579       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:52:34.616589       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:52:34.617332       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 22:52:53.256657       1 factory.go:585] pod is already present in the activeQ
	E0914 22:52:53.308005       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 22:46:53 UTC, ends at Thu 2023-09-14 23:06:57 UTC. --
	Sep 14 23:02:25 old-k8s-version-930717 kubelet[3091]: E0914 23:02:25.274891    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:02:26 old-k8s-version-930717 kubelet[3091]: E0914 23:02:26.334414    3091 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Sep 14 23:02:38 old-k8s-version-930717 kubelet[3091]: E0914 23:02:38.275535    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:02:51 old-k8s-version-930717 kubelet[3091]: E0914 23:02:51.275746    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:03:06 old-k8s-version-930717 kubelet[3091]: E0914 23:03:06.275323    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:03:20 old-k8s-version-930717 kubelet[3091]: E0914 23:03:20.275405    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:03:33 old-k8s-version-930717 kubelet[3091]: E0914 23:03:33.274855    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:03:44 old-k8s-version-930717 kubelet[3091]: E0914 23:03:44.298271    3091 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 23:03:44 old-k8s-version-930717 kubelet[3091]: E0914 23:03:44.298330    3091 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 23:03:44 old-k8s-version-930717 kubelet[3091]: E0914 23:03:44.298375    3091 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 14 23:03:44 old-k8s-version-930717 kubelet[3091]: E0914 23:03:44.298401    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 14 23:03:58 old-k8s-version-930717 kubelet[3091]: E0914 23:03:58.275098    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:04:13 old-k8s-version-930717 kubelet[3091]: E0914 23:04:13.275247    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:04:27 old-k8s-version-930717 kubelet[3091]: E0914 23:04:27.274738    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:04:39 old-k8s-version-930717 kubelet[3091]: E0914 23:04:39.275018    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:04:53 old-k8s-version-930717 kubelet[3091]: E0914 23:04:53.274997    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:05:06 old-k8s-version-930717 kubelet[3091]: E0914 23:05:06.275445    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:05:18 old-k8s-version-930717 kubelet[3091]: E0914 23:05:18.274804    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:05:33 old-k8s-version-930717 kubelet[3091]: E0914 23:05:33.275437    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:05:45 old-k8s-version-930717 kubelet[3091]: E0914 23:05:45.274857    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:06:00 old-k8s-version-930717 kubelet[3091]: E0914 23:06:00.275039    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:06:15 old-k8s-version-930717 kubelet[3091]: E0914 23:06:15.278574    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:06:27 old-k8s-version-930717 kubelet[3091]: E0914 23:06:27.274850    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:06:41 old-k8s-version-930717 kubelet[3091]: E0914 23:06:41.274871    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 23:06:54 old-k8s-version-930717 kubelet[3091]: E0914 23:06:54.274824    3091 pod_workers.go:191] Error syncing pod 995d5d99-10f4-4928-b384-79e5b03b9a2b ("metrics-server-74d5856cc6-qjxtc_kube-system(995d5d99-10f4-4928-b384-79e5b03b9a2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [505f9a835ea06887bb70605f9fd2e84b1596bbd0903dc9975fd554efe69373f0] <==
	* I0914 22:52:56.124552       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:52:56.136628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:52:56.136687       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:52:56.149522       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:52:56.149983       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-930717_0fb98f8f-e029-479c-8cf4-8ebaed133129!
	I0914 22:52:56.154117       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea805da0-96d9-43f4-897c-c2a3a4575986", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-930717_0fb98f8f-e029-479c-8cf4-8ebaed133129 became leader
	I0914 22:52:56.261052       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-930717_0fb98f8f-e029-479c-8cf4-8ebaed133129!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-930717 -n old-k8s-version-930717
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-930717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-qjxtc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-930717 describe pod metrics-server-74d5856cc6-qjxtc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-930717 describe pod metrics-server-74d5856cc6-qjxtc: exit status 1 (73.21818ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-qjxtc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-930717 describe pod metrics-server-74d5856cc6-qjxtc: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (219.04s)
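The log excerpts above all point the same way: the kubelet cannot pull the metrics-server image from the unreachable registry fake.domain (ImagePullBackOff), so the aggregated v1beta1.metrics.k8s.io APIService stays unavailable and the kube-apiserver/kube-controller-manager keep logging 503 "service unavailable" for it. A minimal manual check for a cluster in this state, offered only as a sketch using standard kubectl commands; the context and pod names are copied from the output above and will differ on another run:

	kubectl --context old-k8s-version-930717 get apiservice v1beta1.metrics.k8s.io
	kubectl --context old-k8s-version-930717 -n kube-system describe pod metrics-server-74d5856cc6-qjxtc
	kubectl --context old-k8s-version-930717 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'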

                                                
                                    

Test pass (227/290)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 26.33
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.1/json-events 18.94
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
19 TestBinaryMirror 0.53
20 TestOffline 134.93
22 TestAddons/Setup 145.66
24 TestAddons/parallel/Registry 15.64
26 TestAddons/parallel/InspektorGadget 11.44
27 TestAddons/parallel/MetricsServer 6.61
28 TestAddons/parallel/HelmTiller 14.62
30 TestAddons/parallel/CSI 81.86
31 TestAddons/parallel/Headlamp 15.45
32 TestAddons/parallel/CloudSpanner 6.23
35 TestAddons/serial/GCPAuth/Namespaces 0.11
37 TestCertOptions 95.73
38 TestCertExpiration 272.4
40 TestForceSystemdFlag 96.43
41 TestForceSystemdEnv 70.84
43 TestKVMDriverInstallOrUpdate 3.91
47 TestErrorSpam/setup 45.8
48 TestErrorSpam/start 0.31
49 TestErrorSpam/status 0.71
50 TestErrorSpam/pause 1.34
51 TestErrorSpam/unpause 1.43
52 TestErrorSpam/stop 2.2
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 62.96
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 32.18
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.27
64 TestFunctional/serial/CacheCmd/cache/add_local 2.09
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
72 TestFunctional/serial/ExtraConfig 34.12
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 1.23
76 TestFunctional/serial/InvalidService 4.49
78 TestFunctional/parallel/ConfigCmd 0.29
79 TestFunctional/parallel/DashboardCmd 15.67
80 TestFunctional/parallel/DryRun 0.25
81 TestFunctional/parallel/InternationalLanguage 0.13
82 TestFunctional/parallel/StatusCmd 1.04
86 TestFunctional/parallel/ServiceCmdConnect 8.68
87 TestFunctional/parallel/AddonsCmd 0.11
88 TestFunctional/parallel/PersistentVolumeClaim 54.37
90 TestFunctional/parallel/SSHCmd 0.52
91 TestFunctional/parallel/CpCmd 0.86
92 TestFunctional/parallel/MySQL 27.61
93 TestFunctional/parallel/FileSync 0.29
94 TestFunctional/parallel/CertSync 1.16
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
102 TestFunctional/parallel/License 0.6
103 TestFunctional/parallel/ServiceCmd/DeployApp 12.21
104 TestFunctional/parallel/Version/short 0.04
105 TestFunctional/parallel/Version/components 0.81
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
110 TestFunctional/parallel/ImageCommands/ImageBuild 6.63
111 TestFunctional/parallel/ImageCommands/Setup 2.03
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.42
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.77
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 13.43
127 TestFunctional/parallel/ServiceCmd/List 0.35
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.62
130 TestFunctional/parallel/ServiceCmd/Format 0.38
131 TestFunctional/parallel/ServiceCmd/URL 0.34
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
133 TestFunctional/parallel/ProfileCmd/profile_list 0.32
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
135 TestFunctional/parallel/MountCmd/any-port 23.5
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.23
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.83
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6.22
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.22
140 TestFunctional/parallel/MountCmd/specific-port 1.87
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
142 TestFunctional/delete_addon-resizer_images 0.06
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 112.99
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.42
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
155 TestJSONOutput/start/Command 59.39
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.64
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.57
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 7.09
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.18
183 TestMainNoArgs 0.04
184 TestMinikubeProfile 94.98
187 TestMountStart/serial/StartWithMountFirst 27.27
188 TestMountStart/serial/VerifyMountFirst 0.38
189 TestMountStart/serial/StartWithMountSecond 27.5
190 TestMountStart/serial/VerifyMountSecond 0.37
191 TestMountStart/serial/DeleteFirst 0.85
192 TestMountStart/serial/VerifyMountPostDelete 0.38
193 TestMountStart/serial/Stop 1.09
194 TestMountStart/serial/RestartStopped 22.14
195 TestMountStart/serial/VerifyMountPostStop 0.37
198 TestMultiNode/serial/FreshStart2Nodes 136.37
199 TestMultiNode/serial/DeployApp2Nodes 6.06
201 TestMultiNode/serial/AddNode 43.6
202 TestMultiNode/serial/ProfileList 0.19
203 TestMultiNode/serial/CopyFile 6.8
204 TestMultiNode/serial/StopNode 2.21
205 TestMultiNode/serial/StartAfterStop 29.49
207 TestMultiNode/serial/DeleteNode 1.71
209 TestMultiNode/serial/RestartMultiNode 444.02
210 TestMultiNode/serial/ValidateNameConflict 44.49
217 TestScheduledStopUnix 115.7
223 TestKubernetesUpgrade 199.27
229 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
235 TestPause/serial/Start 110.23
236 TestNoKubernetes/serial/StartWithK8s 105.83
237 TestNoKubernetes/serial/StartWithStopK8s 5.76
239 TestNoKubernetes/serial/Start 31.15
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
241 TestNoKubernetes/serial/ProfileList 4.56
242 TestNoKubernetes/serial/Stop 1.21
243 TestNoKubernetes/serial/StartNoArgs 42.02
251 TestNetworkPlugins/group/false 3.1
255 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
256 TestStoppedBinaryUpgrade/Setup 1.77
259 TestStartStop/group/old-k8s-version/serial/FirstStart 341.4
261 TestStartStop/group/no-preload/serial/FirstStart 104.24
262 TestStartStop/group/no-preload/serial/DeployApp 10.43
263 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
266 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.45
267 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.4
268 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.35
272 TestStartStop/group/embed-certs/serial/FirstStart 60.31
274 TestStartStop/group/no-preload/serial/SecondStart 662.77
275 TestStartStop/group/embed-certs/serial/DeployApp 12.45
276 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
279 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
280 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.85
281 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 508.35
284 TestStartStop/group/embed-certs/serial/SecondStart 482.66
286 TestStartStop/group/old-k8s-version/serial/SecondStart 536.27
296 TestStartStop/group/newest-cni/serial/FirstStart 60.46
297 TestNetworkPlugins/group/auto/Start 86.7
298 TestStartStop/group/newest-cni/serial/DeployApp 0
299 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.64
300 TestStartStop/group/newest-cni/serial/Stop 3.1
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
302 TestStartStop/group/newest-cni/serial/SecondStart 51.3
303 TestNetworkPlugins/group/auto/KubeletFlags 0.26
304 TestNetworkPlugins/group/auto/NetCatPod 15.47
305 TestNetworkPlugins/group/auto/DNS 21.71
306 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
309 TestStartStop/group/newest-cni/serial/Pause 2.35
310 TestNetworkPlugins/group/kindnet/Start 71.94
311 TestNetworkPlugins/group/calico/Start 115.55
312 TestNetworkPlugins/group/auto/Localhost 0.15
313 TestNetworkPlugins/group/auto/HairPin 0.19
314 TestNetworkPlugins/group/custom-flannel/Start 123.1
315 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
316 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
317 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
319 TestStartStop/group/embed-certs/serial/Pause 3.44
320 TestNetworkPlugins/group/kindnet/DNS 0.25
321 TestNetworkPlugins/group/kindnet/Localhost 0.21
322 TestNetworkPlugins/group/kindnet/HairPin 0.19
323 TestNetworkPlugins/group/enable-default-cni/Start 108.03
324 TestNetworkPlugins/group/flannel/Start 106.81
325 TestNetworkPlugins/group/calico/ControllerPod 5.03
326 TestNetworkPlugins/group/calico/KubeletFlags 0.35
327 TestNetworkPlugins/group/calico/NetCatPod 15.61
328 TestNetworkPlugins/group/calico/DNS 0.24
329 TestNetworkPlugins/group/calico/Localhost 0.16
330 TestNetworkPlugins/group/calico/HairPin 0.26
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
332 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.42
333 TestNetworkPlugins/group/custom-flannel/DNS 0.26
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
336 TestNetworkPlugins/group/bridge/Start 103.31
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.46
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
342 TestNetworkPlugins/group/flannel/ControllerPod 5.02
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
344 TestNetworkPlugins/group/flannel/NetCatPod 12.52
345 TestNetworkPlugins/group/flannel/DNS 0.15
346 TestNetworkPlugins/group/flannel/Localhost 0.14
347 TestNetworkPlugins/group/flannel/HairPin 0.14
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
349 TestNetworkPlugins/group/bridge/NetCatPod 12.34
350 TestNetworkPlugins/group/bridge/DNS 0.16
351 TestNetworkPlugins/group/bridge/Localhost 0.13
352 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (26.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-560258 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-560258 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.329829171s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (26.33s)
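The -o=json flag makes minikube print one JSON event per line instead of the usual human-readable output, which is presumably what the json-events assertions consume. A quick way to inspect the same stream by hand, a sketch assuming jq is available and that the step-event type string below matches this minikube version's schema:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-560258 --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2 \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'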

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
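preload-exists appears to verify only that the tarball fetched in the previous step landed in the local cache. The equivalent manual check, with the path and md5 taken from the download URL logged above:

	ls -lh /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	md5sum /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4    # expected: 432b600409d778ea7a21214e83948570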

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-560258
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-560258: exit status 85 (53.669361ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-560258 | jenkins | v1.31.2 | 14 Sep 23 21:36 UTC |          |
	|         | -p download-only-560258        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 21:36:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 21:36:17.873166   13497 out.go:296] Setting OutFile to fd 1 ...
	I0914 21:36:17.873424   13497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:36:17.873434   13497 out.go:309] Setting ErrFile to fd 2...
	I0914 21:36:17.873439   13497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:36:17.873673   13497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	W0914 21:36:17.873847   13497 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17243-6287/.minikube/config/config.json: open /home/jenkins/minikube-integration/17243-6287/.minikube/config/config.json: no such file or directory
	I0914 21:36:17.874558   13497 out.go:303] Setting JSON to true
	I0914 21:36:17.875426   13497 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1120,"bootTime":1694726258,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 21:36:17.875514   13497 start.go:138] virtualization: kvm guest
	I0914 21:36:17.878096   13497 out.go:97] [download-only-560258] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 21:36:17.879721   13497 out.go:169] MINIKUBE_LOCATION=17243
	W0914 21:36:17.878219   13497 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 21:36:17.878272   13497 notify.go:220] Checking for updates...
	I0914 21:36:17.882665   13497 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 21:36:17.884301   13497 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:36:17.885645   13497 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:36:17.887083   13497 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0914 21:36:17.889829   13497 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 21:36:17.890056   13497 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 21:36:18.010712   13497 out.go:97] Using the kvm2 driver based on user configuration
	I0914 21:36:18.010742   13497 start.go:298] selected driver: kvm2
	I0914 21:36:18.010749   13497 start.go:902] validating driver "kvm2" against <nil>
	I0914 21:36:18.011092   13497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:36:18.011212   13497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 21:36:18.025614   13497 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 21:36:18.025667   13497 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 21:36:18.026154   13497 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0914 21:36:18.026330   13497 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 21:36:18.026370   13497 cni.go:84] Creating CNI manager for ""
	I0914 21:36:18.026383   13497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 21:36:18.026396   13497 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 21:36:18.026410   13497 start_flags.go:321] config:
	{Name:download-only-560258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-560258 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:36:18.026631   13497 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:36:18.028593   13497 out.go:97] Downloading VM boot image ...
	I0914 21:36:18.028643   13497 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/iso/amd64/minikube-v1.31.0-1694625400-17243-amd64.iso
	I0914 21:36:26.859680   13497 out.go:97] Starting control plane node download-only-560258 in cluster download-only-560258
	I0914 21:36:26.859702   13497 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 21:36:26.960672   13497 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0914 21:36:26.960695   13497 cache.go:57] Caching tarball of preloaded images
	I0914 21:36:26.960852   13497 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 21:36:26.962759   13497 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0914 21:36:26.962778   13497 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0914 21:36:27.066677   13497 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0914 21:36:42.354318   13497 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0914 21:36:42.354401   13497 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-560258"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.28.1/json-events (18.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-560258 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-560258 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (18.940710074s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (18.94s)

                                                
                                    
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-560258
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-560258: exit status 85 (55.153384ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-560258 | jenkins | v1.31.2 | 14 Sep 23 21:36 UTC |          |
	|         | -p download-only-560258        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-560258 | jenkins | v1.31.2 | 14 Sep 23 21:36 UTC |          |
	|         | -p download-only-560258        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 21:36:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 21:36:44.257483   13587 out.go:296] Setting OutFile to fd 1 ...
	I0914 21:36:44.257728   13587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:36:44.257739   13587 out.go:309] Setting ErrFile to fd 2...
	I0914 21:36:44.257743   13587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:36:44.257943   13587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	W0914 21:36:44.258075   13587 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17243-6287/.minikube/config/config.json: open /home/jenkins/minikube-integration/17243-6287/.minikube/config/config.json: no such file or directory
	I0914 21:36:44.258506   13587 out.go:303] Setting JSON to true
	I0914 21:36:44.259258   13587 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1146,"bootTime":1694726258,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 21:36:44.259336   13587 start.go:138] virtualization: kvm guest
	I0914 21:36:44.261318   13587 out.go:97] [download-only-560258] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 21:36:44.262887   13587 out.go:169] MINIKUBE_LOCATION=17243
	I0914 21:36:44.261514   13587 notify.go:220] Checking for updates...
	I0914 21:36:44.265407   13587 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 21:36:44.266852   13587 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:36:44.268401   13587 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:36:44.269722   13587 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0914 21:36:44.272421   13587 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 21:36:44.272829   13587 config.go:182] Loaded profile config "download-only-560258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0914 21:36:44.272872   13587 start.go:810] api.Load failed for download-only-560258: filestore "download-only-560258": Docker machine "download-only-560258" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 21:36:44.272944   13587 driver.go:373] Setting default libvirt URI to qemu:///system
	W0914 21:36:44.272976   13587 start.go:810] api.Load failed for download-only-560258: filestore "download-only-560258": Docker machine "download-only-560258" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 21:36:44.304794   13587 out.go:97] Using the kvm2 driver based on existing profile
	I0914 21:36:44.304818   13587 start.go:298] selected driver: kvm2
	I0914 21:36:44.304823   13587 start.go:902] validating driver "kvm2" against &{Name:download-only-560258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-560258 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:36:44.305148   13587 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:36:44.305214   13587 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17243-6287/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 21:36:44.318632   13587 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 21:36:44.319274   13587 cni.go:84] Creating CNI manager for ""
	I0914 21:36:44.319298   13587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 21:36:44.319309   13587 start_flags.go:321] config:
	{Name:download-only-560258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-560258 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:36:44.319495   13587 iso.go:125] acquiring lock: {Name:mk25020bcca9fa2c06f0f25e6b41c7ee83ae337a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 21:36:44.321300   13587 out.go:97] Starting control plane node download-only-560258 in cluster download-only-560258
	I0914 21:36:44.321318   13587 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 21:36:44.463125   13587 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0914 21:36:44.463160   13587 cache.go:57] Caching tarball of preloaded images
	I0914 21:36:44.463319   13587 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 21:36:44.464961   13587 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0914 21:36:44.464981   13587 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 ...
	I0914 21:36:44.571720   13587 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:7b00bd3467481f38e4a66499519b2cca -> /home/jenkins/minikube-integration/17243-6287/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-560258"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-560258
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.53s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-674696 --alsologtostderr --binary-mirror http://127.0.0.1:38139 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-674696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-674696
--- PASS: TestBinaryMirror (0.53s)

                                                
                                    
x
+
TestOffline (134.93s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-948115 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-948115 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m13.903870505s)
helpers_test.go:175: Cleaning up "offline-crio-948115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-948115
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-948115: (1.03057s)
--- PASS: TestOffline (134.93s)

                                                
                                    
x
+
TestAddons/Setup (145.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-452179 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-452179 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.660281547s)
--- PASS: TestAddons/Setup (145.66s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 23.129962ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5hndr" [48f881de-7dbb-4535-8516-d1f43d100169] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015500682s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4d4zp" [ae93be0b-e4f3-45d1-a641-95ee97d410d2] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.020604981s
addons_test.go:316: (dbg) Run:  kubectl --context addons-452179 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-452179 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-452179 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.81535351s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 ip
2023/09/14 21:39:44 [DEBUG] GET http://192.168.39.45:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.64s)
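
For reference, the registry reachability check this test performs can be repeated by hand against the same profile; the commands below are the ones from the log, reassembled into a runnable sequence (addons-452179 is the profile name from this run):

	out/minikube-linux-amd64 -p addons-452179 ip
	kubectl --context addons-452179 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

The wget --spider probe succeeding against the cluster-internal service name is what the test relies on to call the registry addon healthy.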

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.44s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j88p9" [f304d3e0-70aa-4034-9126-031587bb3c85] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.018295502s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-452179
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-452179: (6.42251699s)
--- PASS: TestAddons/parallel/InspektorGadget (11.44s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.61s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 22.933356ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-h6p2j" [e307d1a4-c43b-46bb-b55d-17ba22ab836c] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017860278s
addons_test.go:391: (dbg) Run:  kubectl --context addons-452179 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-452179 addons disable metrics-server --alsologtostderr -v=1: (1.479560047s)
--- PASS: TestAddons/parallel/MetricsServer (6.61s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (14.62s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 18.997704ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-m6w86" [221ba0c5-6fb4-46ff-95bc-9ec082635102] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.088817625s
addons_test.go:449: (dbg) Run:  kubectl --context addons-452179 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-452179 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.733268604s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.62s)

                                                
                                    
x
+
TestAddons/parallel/CSI (81.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 9.514915ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-452179 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:540: (dbg) Done: kubectl --context addons-452179 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.081475573s)
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-452179 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9731285d-210c-4a55-8465-aba26cadbfb0] Pending
helpers_test.go:344: "task-pv-pod" [9731285d-210c-4a55-8465-aba26cadbfb0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9731285d-210c-4a55-8465-aba26cadbfb0] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.021109676s
addons_test.go:560: (dbg) Run:  kubectl --context addons-452179 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-452179 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-452179 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-452179 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-452179 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-452179 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-452179 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-452179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-452179 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cc2584de-b08f-4293-b991-942571876895] Pending
helpers_test.go:344: "task-pv-pod-restore" [cc2584de-b08f-4293-b991-942571876895] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cc2584de-b08f-4293-b991-942571876895] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.020843897s
addons_test.go:602: (dbg) Run:  kubectl --context addons-452179 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-452179 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-452179 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-452179 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.707766318s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-452179 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (81.86s)
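
The long run of identical kubectl get pvc calls above is the test helper polling the claim's phase until provisioning completes; a manual equivalent (a sketch, reusing the context and object names from this run) is:

	kubectl --context addons-452179 get pvc hpvc -n default -o jsonpath={.status.phase}
	# repeat until the phase reads Bound, then create the consuming pod:
	kubectl --context addons-452179 create -f testdata/csi-hostpath-driver/pv-pod.yaml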

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.45s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-452179 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-452179 --alsologtostderr -v=1: (1.423875412s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-4kfkp" [2fc64aa4-6651-4623-8c18-167115ff4449] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-4kfkp" [2fc64aa4-6651-4623-8c18-167115ff4449] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.018352228s
--- PASS: TestAddons/parallel/Headlamp (15.45s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.23s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-mxm2b" [26414a1c-9ba0-428e-85b1-431ba994ee14] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0157538s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-452179
addons_test.go:836: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-452179: (1.191046647s)
--- PASS: TestAddons/parallel/CloudSpanner (6.23s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-452179 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-452179 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)
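
What this check exercises, in plain terms, is the gcp-auth addon copying its credential secret into namespaces created after the addon was enabled; reproduced by hand with the same context it is just:

	kubectl --context addons-452179 create ns new-namespace
	kubectl --context addons-452179 get secret gcp-auth -n new-namespace

The second command succeeding is the signal that the secret was propagated into the new namespace.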

                                                
                                    
x
+
TestCertOptions (95.73s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-927918 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-927918 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m34.230442231s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-927918 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-927918 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-927918 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-927918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-927918
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-927918: (1.014694422s)
--- PASS: TestCertOptions (95.73s)

                                                
                                    
x
+
TestCertExpiration (272.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-631227 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-631227 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m11.530113689s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-631227 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-631227 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (19.89804589s)
helpers_test.go:175: Cleaning up "cert-expiration-631227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-631227
--- PASS: TestCertExpiration (272.40s)

                                                
                                    
x
+
TestForceSystemdFlag (96.43s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-621738 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-621738 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m35.067061777s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-621738 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-621738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-621738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-621738: (1.095517463s)
--- PASS: TestForceSystemdFlag (96.43s)

                                                
                                    
x
+
TestForceSystemdEnv (70.84s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-248976 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-248976 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.846359063s)
helpers_test.go:175: Cleaning up "force-systemd-env-248976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-248976
--- PASS: TestForceSystemdEnv (70.84s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.91s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E0914 22:34:29.764922   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (3.91s)

                                                
                                    
x
+
TestErrorSpam/setup (45.8s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-014635 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-014635 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-014635 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-014635 --driver=kvm2  --container-runtime=crio: (45.797029464s)
--- PASS: TestErrorSpam/setup (45.80s)

                                                
                                    
x
+
TestErrorSpam/start (0.31s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.34s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 pause
--- PASS: TestErrorSpam/pause (1.34s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.43s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

                                                
                                    
x
+
TestErrorSpam/stop (2.2s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 stop: (2.073375475s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-014635 --log_dir /tmp/nospam-014635 stop
--- PASS: TestErrorSpam/stop (2.20s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17243-6287/.minikube/files/etc/test/nested/copy/13485/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (62.96s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-337253 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-337253 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m2.961046989s)
--- PASS: TestFunctional/serial/StartWithProxy (62.96s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (32.18s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-337253 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-337253 --alsologtostderr -v=8: (32.179499094s)
functional_test.go:659: soft start took 32.180185937s for "functional-337253" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.18s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-337253 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 cache add registry.k8s.io/pause:3.1: (1.116728961s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 cache add registry.k8s.io/pause:3.3: (1.020333889s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 cache add registry.k8s.io/pause:latest: (1.131583412s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-337253 /tmp/TestFunctionalserialCacheCmdcacheadd_local2112318917/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 cache add minikube-local-cache-test:functional-337253
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 cache add minikube-local-cache-test:functional-337253: (1.790532751s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 cache delete minikube-local-cache-test:functional-337253
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-337253
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.09s)
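
The local-cache flow exercised here is: build an image on the host with docker, add it to minikube's image cache, then remove it from both places. A sketch using the tag from this run (the build-context path below is a placeholder for the temporary directory the test generates):

	docker build -t minikube-local-cache-test:functional-337253 <build-context-dir>
	out/minikube-linux-amd64 -p functional-337253 cache add minikube-local-cache-test:functional-337253
	out/minikube-linux-amd64 -p functional-337253 cache delete minikube-local-cache-test:functional-337253
	docker rmi minikube-local-cache-test:functional-337253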

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (193.425734ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
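
The sequence above removes a cached image from the node, confirms it is gone, and then restores it from the host-side cache; run by hand with the binary and profile from this run it is:

	out/minikube-linux-amd64 -p functional-337253 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-337253 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail while the image is absent
	out/minikube-linux-amd64 -p functional-337253 cache reload
	out/minikube-linux-amd64 -p functional-337253 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds once the cache has been pushed back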

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 kubectl -- --context functional-337253 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-337253 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-337253 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-337253 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.117472456s)
functional_test.go:757: restart took 34.117627124s for "functional-337253" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.12s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-337253 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 logs: (1.230471395s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.49s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-337253 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-337253
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-337253: exit status 115 (262.246851ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.73:31853 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-337253 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.49s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 config get cpus: exit status 14 (43.160143ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 config get cpus: exit status 14 (44.634189ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (15.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-337253 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-337253 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20737: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.67s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-337253 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-337253 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.34108ms)

                                                
                                                
-- stdout --
	* [functional-337253] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 21:49:05.794892   20635 out.go:296] Setting OutFile to fd 1 ...
	I0914 21:49:05.795032   20635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:49:05.795038   20635 out.go:309] Setting ErrFile to fd 2...
	I0914 21:49:05.795045   20635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:49:05.795558   20635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 21:49:05.796350   20635 out.go:303] Setting JSON to false
	I0914 21:49:05.797398   20635 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1888,"bootTime":1694726258,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 21:49:05.797460   20635 start.go:138] virtualization: kvm guest
	I0914 21:49:05.799571   20635 out.go:177] * [functional-337253] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 21:49:05.801049   20635 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 21:49:05.801054   20635 notify.go:220] Checking for updates...
	I0914 21:49:05.802295   20635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 21:49:05.803720   20635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:49:05.805020   20635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:49:05.806342   20635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 21:49:05.807720   20635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 21:49:05.809312   20635 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 21:49:05.809694   20635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:49:05.809730   20635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:49:05.823398   20635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0914 21:49:05.823793   20635 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:49:05.824362   20635 main.go:141] libmachine: Using API Version  1
	I0914 21:49:05.824387   20635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:49:05.824768   20635 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:49:05.824951   20635 main.go:141] libmachine: (functional-337253) Calling .DriverName
	I0914 21:49:05.825161   20635 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 21:49:05.825430   20635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:49:05.825462   20635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:49:05.839252   20635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
	I0914 21:49:05.839648   20635 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:49:05.840276   20635 main.go:141] libmachine: Using API Version  1
	I0914 21:49:05.840301   20635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:49:05.840646   20635 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:49:05.840825   20635 main.go:141] libmachine: (functional-337253) Calling .DriverName
	I0914 21:49:05.872139   20635 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 21:49:05.873548   20635 start.go:298] selected driver: kvm2
	I0914 21:49:05.873561   20635 start.go:902] validating driver "kvm2" against &{Name:functional-337253 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-337253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.73 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:49:05.873676   20635 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 21:49:05.875912   20635 out.go:177] 
	W0914 21:49:05.877297   20635 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 21:49:05.879063   20635 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-337253 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
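
The non-zero exit here is the point of the test: `--dry-run --memory 250MB` has to fail validation, since 250 MiB is below the 1800 MB usable minimum, and minikube reports that as RSRC_INSUFFICIENT_REQ_MEMORY with exit status 23. As a rough illustration of the shape of that check (not minikube's actual code; the constant and function names are invented):

```go
package main

import "fmt"

// minUsableMemoryMB mirrors the 1800MB threshold reported above; the
// name is invented for this sketch.
const minUsableMemoryMB = 1800

// validateRequestedMemory fails when the requested allocation is
// below the usable minimum, which is what the dry run trips with
// --memory 250MB.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to", err) // the CLI maps this condition to exit status 23
	}
}
```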

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-337253 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-337253 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.792877ms)

                                                
                                                
-- stdout --
	* [functional-337253] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 21:49:12.288163   21062 out.go:296] Setting OutFile to fd 1 ...
	I0914 21:49:12.288333   21062 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:49:12.288347   21062 out.go:309] Setting ErrFile to fd 2...
	I0914 21:49:12.288356   21062 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 21:49:12.288905   21062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 21:49:12.289634   21062 out.go:303] Setting JSON to false
	I0914 21:49:12.290526   21062 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1894,"bootTime":1694726258,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 21:49:12.290582   21062 start.go:138] virtualization: kvm guest
	I0914 21:49:12.292470   21062 out.go:177] * [functional-337253] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0914 21:49:12.293840   21062 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 21:49:12.295159   21062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 21:49:12.293856   21062 notify.go:220] Checking for updates...
	I0914 21:49:12.297693   21062 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 21:49:12.299046   21062 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 21:49:12.300352   21062 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 21:49:12.301552   21062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 21:49:12.303124   21062 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 21:49:12.303528   21062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:49:12.303571   21062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:49:12.317644   21062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0914 21:49:12.318045   21062 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:49:12.318627   21062 main.go:141] libmachine: Using API Version  1
	I0914 21:49:12.318649   21062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:49:12.319036   21062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:49:12.319225   21062 main.go:141] libmachine: (functional-337253) Calling .DriverName
	I0914 21:49:12.319485   21062 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 21:49:12.319758   21062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 21:49:12.319794   21062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 21:49:12.333473   21062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0914 21:49:12.333809   21062 main.go:141] libmachine: () Calling .GetVersion
	I0914 21:49:12.334217   21062 main.go:141] libmachine: Using API Version  1
	I0914 21:49:12.334246   21062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 21:49:12.334616   21062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 21:49:12.334762   21062 main.go:141] libmachine: (functional-337253) Calling .DriverName
	I0914 21:49:12.370087   21062 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0914 21:49:12.371440   21062 start.go:298] selected driver: kvm2
	I0914 21:49:12.371452   21062 start.go:902] validating driver "kvm2" against &{Name:functional-337253 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17243/minikube-v1.31.0-1694625400-17243-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-337253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.73 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 21:49:12.371576   21062 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 21:49:12.373901   21062 out.go:177] 
	W0914 21:49:12.375590   21062 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 21:49:12.377183   21062 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-337253 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-337253 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-sql2s" [19a08903-7abe-497f-b4a5-dc317b098796] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-sql2s" [19a08903-7abe-497f-b4a5-dc317b098796] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.043577234s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.73:30149
functional_test.go:1674: http://192.168.50.73:30149: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-sql2s

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.73:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.73:30149
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.68s)
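
The test resolves the service to a NodePort URL (http://192.168.50.73:30149 above) and fetches it until the echoserver answers. A minimal sketch of that kind of poll, assuming plain net/http and an invented helper name:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForEndpoint polls url until it returns HTTP 200 or the timeout
// expires, and hands back the first successful body. The helper name
// and retry interval are illustrative, not the test's own.
func waitForEndpoint(url string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err != nil {
			time.Sleep(2 * time.Second)
			continue
		}
		body, readErr := io.ReadAll(resp.Body)
		resp.Body.Close()
		if readErr == nil && resp.StatusCode == http.StatusOK {
			return string(body), nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no successful response from %s within %s", url, timeout)
}

func main() {
	body, err := waitForEndpoint("http://192.168.50.73:30149", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(body) // echoserver reports Hostname, server values, request headers
}
```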

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (54.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8a8940b6-45be-4d59-b280-6752ec8b5556] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013081713s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-337253 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-337253 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-337253 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-337253 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-337253 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9e3f317b-2afc-45b0-a48e-ee361b1f3f2c] Pending
helpers_test.go:344: "sp-pod" [9e3f317b-2afc-45b0-a48e-ee361b1f3f2c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9e3f317b-2afc-45b0-a48e-ee361b1f3f2c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.034191885s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-337253 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-337253 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-337253 delete -f testdata/storage-provisioner/pod.yaml: (1.097788122s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-337253 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8db555bf-3404-4eba-b099-39fee339f98a] Pending
helpers_test.go:344: "sp-pod" [8db555bf-3404-4eba-b099-39fee339f98a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8db555bf-3404-4eba-b099-39fee339f98a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.019577106s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-337253 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.37s)
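
The sequence above is the actual persistence check: write a file into the PVC-backed mount, delete and recreate the pod, then confirm the file is still there. A compressed sketch of those same steps via kubectl (the waits for sp-pod to reach Running, which the test performs between steps, are omitted here):

```go
package main

import (
	"fmt"
	"os/exec"
)

// run is a tiny helper for the kubectl calls in this sketch; the
// step sequence mirrors the log above.
func run(args ...string) error {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-337253"}, args...)...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // foo should survive on the PVC
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}
```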

                                                
                                    
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh -n functional-337253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 cp functional-337253:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3733758984/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh -n functional-337253 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.86s)

                                                
                                    
TestFunctional/parallel/MySQL (27.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-337253 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-h949l" [153c2521-2c6f-4f22-b0f4-2879e645bed6] Pending
helpers_test.go:344: "mysql-859648c796-h949l" [153c2521-2c6f-4f22-b0f4-2879e645bed6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-h949l" [153c2521-2c6f-4f22-b0f4-2879e645bed6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.025160094s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-337253 exec mysql-859648c796-h949l -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-337253 exec mysql-859648c796-h949l -- mysql -ppassword -e "show databases;": exit status 1 (368.099276ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-337253 exec mysql-859648c796-h949l -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-337253 exec mysql-859648c796-h949l -- mysql -ppassword -e "show databases;": exit status 1 (518.500639ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-337253 exec mysql-859648c796-h949l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.61s)
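
The two failed attempts above are normal start-up noise from the mysql container (authentication not yet initialised, then the socket not yet listening), so the test simply repeats the query until it exits cleanly. A rough sketch of that retry loop with os/exec (attempt count and sleep are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// queryUntilReady re-runs the `kubectl exec ... mysql` query until it
// exits zero, roughly what the test loop above does while the mysql
// container finishes initialising.
func queryUntilReady(pod string, attempts int) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(
			"kubectl", "--context", "functional-337253",
			"exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;",
		).CombinedOutput()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(5 * time.Second)
	}
	return nil, lastErr
}

func main() {
	out, err := queryUntilReady("mysql-859648c796-h949l", 10)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%s", out)
}
```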

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13485/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo cat /etc/test/nested/copy/13485/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13485.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo cat /etc/ssl/certs/13485.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13485.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo cat /usr/share/ca-certificates/13485.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/134852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo cat /etc/ssl/certs/134852.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/134852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo cat /usr/share/ca-certificates/134852.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.16s)
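
The hash-named paths checked here (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash filenames under /etc/ssl/certs, the form trust stores use to point at the synced .pem files, which is presumably why the test verifies both spellings. A small sketch that derives such a name for a local certificate by shelling out to openssl (the path is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashName computes the `<hash>.0` filename OpenSSL-style
// trust stores use for a certificate, by asking openssl for the
// subject hash of the given PEM file.
func subjectHashName(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	// Illustrative path; the test syncs files such as 13485.pem into the VM.
	name, err := subjectHashName("/etc/ssl/certs/13485.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println("expected link name:", name)
}
```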

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-337253 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 ssh "sudo systemctl is-active docker": exit status 1 (211.719306ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 ssh "sudo systemctl is-active containerd": exit status 1 (210.068859ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
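
The exit status 1 results are expected here: with crio as the active runtime, `systemctl is-active docker` (and containerd) prints `inactive` and exits non-zero (status 3 in the stderr above), so the test reads the printed state rather than treating the exit code as a failure. A small sketch of querying unit state the same way (the helper name is invented):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// unitIsActive runs `systemctl is-active` and reports whether the
// unit is active. systemctl exits non-zero (3 in the log above) when
// the unit is inactive, so the error alone is not a failure signal;
// the printed state is what matters.
func unitIsActive(unit string) (bool, string) {
	out, _ := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	return state == "active", state
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		active, state := unitIsActive(unit)
		fmt.Printf("%s: %s (active=%v)\n", unit, state, active)
	}
}
```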

                                                
                                    
TestFunctional/parallel/License (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-337253 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-337253 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-tsdz6" [34312ec0-e44c-469f-aced-f353fe97c1b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-tsdz6" [34312ec0-e44c-469f-aced-f353fe97c1b6] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.025360171s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.21s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-337253 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-337253
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-337253
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-337253 image ls --format short --alsologtostderr:
I0914 21:49:13.520293   21300 out.go:296] Setting OutFile to fd 1 ...
I0914 21:49:13.520575   21300 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:13.520588   21300 out.go:309] Setting ErrFile to fd 2...
I0914 21:49:13.520594   21300 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:13.520813   21300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
I0914 21:49:13.521519   21300 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:13.521660   21300 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:13.522171   21300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:13.522217   21300 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:13.536799   21300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
I0914 21:49:13.537284   21300 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:13.537939   21300 main.go:141] libmachine: Using API Version  1
I0914 21:49:13.537965   21300 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:13.538439   21300 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:13.538642   21300 main.go:141] libmachine: (functional-337253) Calling .GetState
I0914 21:49:13.540728   21300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:13.540770   21300 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:13.554522   21300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43405
I0914 21:49:13.554830   21300 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:13.555163   21300 main.go:141] libmachine: Using API Version  1
I0914 21:49:13.555184   21300 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:13.555515   21300 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:13.555690   21300 main.go:141] libmachine: (functional-337253) Calling .DriverName
I0914 21:49:13.555872   21300 ssh_runner.go:195] Run: systemctl --version
I0914 21:49:13.555900   21300 main.go:141] libmachine: (functional-337253) Calling .GetSSHHostname
I0914 21:49:13.558618   21300 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:13.559053   21300 main.go:141] libmachine: (functional-337253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:23:fa", ip: ""} in network mk-functional-337253: {Iface:virbr1 ExpiryTime:2023-09-14 22:46:23 +0000 UTC Type:0 Mac:52:54:00:48:23:fa Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:functional-337253 Clientid:01:52:54:00:48:23:fa}
I0914 21:49:13.559075   21300 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined IP address 192.168.50.73 and MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:13.559214   21300 main.go:141] libmachine: (functional-337253) Calling .GetSSHPort
I0914 21:49:13.559349   21300 main.go:141] libmachine: (functional-337253) Calling .GetSSHKeyPath
I0914 21:49:13.559437   21300 main.go:141] libmachine: (functional-337253) Calling .GetSSHUsername
I0914 21:49:13.559558   21300 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/functional-337253/id_rsa Username:docker}
I0914 21:49:13.637055   21300 ssh_runner.go:195] Run: sudo crictl images --output json
I0914 21:49:13.668194   21300 main.go:141] libmachine: Making call to close driver server
I0914 21:49:13.668212   21300 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:13.668532   21300 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
I0914 21:49:13.668552   21300 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:13.668566   21300 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 21:49:13.668593   21300 main.go:141] libmachine: Making call to close driver server
I0914 21:49:13.668604   21300 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:13.668826   21300 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
I0914 21:49:13.668924   21300 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:13.668965   21300 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
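
Per the stderr above, `image ls` works by running `sudo crictl images --output json` over SSH on the node and flattening the result into tag names. A minimal sketch of decoding that JSON (the struct covers only the fields used here, and the field names are assumed from crictl's usual output rather than taken from this report):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// crictlImage models the subset of fields this sketch reads from
// `crictl images --output json`; the JSON keys are assumptions about
// crictl's output format.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// A trimmed-down payload in the shape crictl is expected to emit.
	raw := []byte(`{"images":[{"id":"sha256:abc","repoTags":["registry.k8s.io/pause:3.9"],"repoDigests":[],"size":"750414"}]}`)

	var list crictlImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // `image ls --format short` prints one tag per line
		}
	}
}
```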

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-337253 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-337253  | ffd4cfbbe753e | 34.1MB |
| localhost/minikube-local-cache-test     | functional-337253  | b48cb842e7ba0 | 3.35kB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-proxy              | v1.28.1            | 6cdbabde3874e | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| docker.io/library/nginx                 | latest             | f5a6b296b8a29 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 821b3dfea27be | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b462ce0c8b1ff | 61.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| registry.k8s.io/kube-apiserver          | v1.28.1            | 5c801295c21d0 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-337253 image ls --format table --alsologtostderr:
I0914 21:49:14.378241   21525 out.go:296] Setting OutFile to fd 1 ...
I0914 21:49:14.378510   21525 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:14.378524   21525 out.go:309] Setting ErrFile to fd 2...
I0914 21:49:14.378532   21525 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:14.378835   21525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
I0914 21:49:14.379651   21525 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:14.379809   21525 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:14.380361   21525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:14.380411   21525 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:14.394583   21525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
I0914 21:49:14.395104   21525 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:14.395628   21525 main.go:141] libmachine: Using API Version  1
I0914 21:49:14.395651   21525 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:14.395990   21525 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:14.396178   21525 main.go:141] libmachine: (functional-337253) Calling .GetState
I0914 21:49:14.397926   21525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:14.397969   21525 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:14.412070   21525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34903
I0914 21:49:14.412488   21525 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:14.412976   21525 main.go:141] libmachine: Using API Version  1
I0914 21:49:14.413001   21525 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:14.413366   21525 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:14.413571   21525 main.go:141] libmachine: (functional-337253) Calling .DriverName
I0914 21:49:14.413802   21525 ssh_runner.go:195] Run: systemctl --version
I0914 21:49:14.413829   21525 main.go:141] libmachine: (functional-337253) Calling .GetSSHHostname
I0914 21:49:14.416490   21525 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:14.416875   21525 main.go:141] libmachine: (functional-337253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:23:fa", ip: ""} in network mk-functional-337253: {Iface:virbr1 ExpiryTime:2023-09-14 22:46:23 +0000 UTC Type:0 Mac:52:54:00:48:23:fa Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:functional-337253 Clientid:01:52:54:00:48:23:fa}
I0914 21:49:14.416918   21525 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined IP address 192.168.50.73 and MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:14.416980   21525 main.go:141] libmachine: (functional-337253) Calling .GetSSHPort
I0914 21:49:14.417198   21525 main.go:141] libmachine: (functional-337253) Calling .GetSSHKeyPath
I0914 21:49:14.417379   21525 main.go:141] libmachine: (functional-337253) Calling .GetSSHUsername
I0914 21:49:14.417522   21525 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/functional-337253/id_rsa Username:docker}
I0914 21:49:14.496952   21525 ssh_runner.go:195] Run: sudo crictl images --output json
I0914 21:49:14.528313   21525 main.go:141] libmachine: Making call to close driver server
I0914 21:49:14.528329   21525 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:14.528580   21525 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:14.528603   21525 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 21:49:14.528617   21525 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
I0914 21:49:14.528620   21525 main.go:141] libmachine: Making call to close driver server
I0914 21:49:14.528674   21525 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:14.528972   21525 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
I0914 21:49:14.529056   21525 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:14.529079   21525 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-337253 image ls --format json --alsologtostderr:
[{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-337253"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/s
torage-provisioner:v5"],"size":"31470524"},{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":["registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"126972880"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4","registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cb
a29052bbb1131890bf91d06bf1e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"61477686"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":["docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153","docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820093"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha
256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830","registry.k8s.io/kube-controller-manager@sha256
:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"123163446"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":["registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3","registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"74680215"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9
d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"b48cb842e7ba058298955d1e89dbc16d909bad823b13bfe7980d5431eb47c51d","repoDigests":["localhost/minikube-local-cache-test@sha256:cc6913368245807c1fa780badf28e53332392f2d56b45f73c99adc69bffc3d7e"],"repoTags":["localhost/minikube-local-cache-test:functional-337253"],"size":"3345"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","rep
oDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-337253 image ls --format json --alsologtostderr:
I0914 21:49:14.150281   21490 out.go:296] Setting OutFile to fd 1 ...
I0914 21:49:14.150527   21490 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:14.150536   21490 out.go:309] Setting ErrFile to fd 2...
I0914 21:49:14.150541   21490 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:14.150728   21490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
I0914 21:49:14.151245   21490 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:14.151336   21490 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:14.151705   21490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:14.151752   21490 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:14.165750   21490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
I0914 21:49:14.166190   21490 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:14.166757   21490 main.go:141] libmachine: Using API Version  1
I0914 21:49:14.166778   21490 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:14.167071   21490 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:14.167210   21490 main.go:141] libmachine: (functional-337253) Calling .GetState
I0914 21:49:14.168955   21490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:14.169017   21490 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:14.182305   21490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
I0914 21:49:14.182628   21490 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:14.183020   21490 main.go:141] libmachine: Using API Version  1
I0914 21:49:14.183037   21490 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:14.183307   21490 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:14.183475   21490 main.go:141] libmachine: (functional-337253) Calling .DriverName
I0914 21:49:14.183649   21490 ssh_runner.go:195] Run: systemctl --version
I0914 21:49:14.183669   21490 main.go:141] libmachine: (functional-337253) Calling .GetSSHHostname
I0914 21:49:14.186188   21490 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:14.186577   21490 main.go:141] libmachine: (functional-337253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:23:fa", ip: ""} in network mk-functional-337253: {Iface:virbr1 ExpiryTime:2023-09-14 22:46:23 +0000 UTC Type:0 Mac:52:54:00:48:23:fa Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:functional-337253 Clientid:01:52:54:00:48:23:fa}
I0914 21:49:14.186604   21490 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined IP address 192.168.50.73 and MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:14.186726   21490 main.go:141] libmachine: (functional-337253) Calling .GetSSHPort
I0914 21:49:14.187003   21490 main.go:141] libmachine: (functional-337253) Calling .GetSSHKeyPath
I0914 21:49:14.187146   21490 main.go:141] libmachine: (functional-337253) Calling .GetSSHUsername
I0914 21:49:14.187281   21490 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/functional-337253/id_rsa Username:docker}
I0914 21:49:14.278228   21490 ssh_runner.go:195] Run: sudo crictl images --output json
I0914 21:49:14.321980   21490 main.go:141] libmachine: Making call to close driver server
I0914 21:49:14.321992   21490 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:14.322262   21490 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
I0914 21:49:14.322275   21490 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:14.322290   21490 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 21:49:14.322305   21490 main.go:141] libmachine: Making call to close driver server
I0914 21:49:14.322313   21490 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:14.322529   21490 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
I0914 21:49:14.322546   21490 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:14.322558   21490 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
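Note: the --alsologtostderr trace above shows how "image ls" is implemented: minikube opens an SSH session to the node and queries the container runtime directly. A manual equivalent, using the same call the trace's ssh_runner issues, would be:

# same crictl invocation seen in the ssh_runner line of the trace above
out/minikube-linux-amd64 -p functional-337253 ssh "sudo crictl images --output json"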

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-337253 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: b48cb842e7ba058298955d1e89dbc16d909bad823b13bfe7980d5431eb47c51d
repoDigests:
- localhost/minikube-local-cache-test@sha256:cc6913368245807c1fa780badf28e53332392f2d56b45f73c99adc69bffc3d7e
repoTags:
- localhost/minikube-local-cache-test:functional-337253
size: "3345"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "123163446"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "74680215"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126972880"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
- docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a
repoTags:
- docker.io/library/nginx:latest
size: "190820093"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-337253
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
- registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "61477686"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-337253 image ls --format yaml --alsologtostderr:
I0914 21:49:13.715022   21354 out.go:296] Setting OutFile to fd 1 ...
I0914 21:49:13.715235   21354 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:13.715243   21354 out.go:309] Setting ErrFile to fd 2...
I0914 21:49:13.715248   21354 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:13.715437   21354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
I0914 21:49:13.715986   21354 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:13.716079   21354 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:13.716410   21354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:13.716449   21354 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:13.729705   21354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
I0914 21:49:13.730162   21354 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:13.730760   21354 main.go:141] libmachine: Using API Version  1
I0914 21:49:13.730788   21354 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:13.731138   21354 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:13.731333   21354 main.go:141] libmachine: (functional-337253) Calling .GetState
I0914 21:49:13.733374   21354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:13.733425   21354 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:13.746546   21354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
I0914 21:49:13.746872   21354 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:13.747348   21354 main.go:141] libmachine: Using API Version  1
I0914 21:49:13.747367   21354 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:13.747704   21354 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:13.747857   21354 main.go:141] libmachine: (functional-337253) Calling .DriverName
I0914 21:49:13.748004   21354 ssh_runner.go:195] Run: systemctl --version
I0914 21:49:13.748025   21354 main.go:141] libmachine: (functional-337253) Calling .GetSSHHostname
I0914 21:49:13.750787   21354 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:13.751121   21354 main.go:141] libmachine: (functional-337253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:23:fa", ip: ""} in network mk-functional-337253: {Iface:virbr1 ExpiryTime:2023-09-14 22:46:23 +0000 UTC Type:0 Mac:52:54:00:48:23:fa Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:functional-337253 Clientid:01:52:54:00:48:23:fa}
I0914 21:49:13.751150   21354 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined IP address 192.168.50.73 and MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:13.751329   21354 main.go:141] libmachine: (functional-337253) Calling .GetSSHPort
I0914 21:49:13.751483   21354 main.go:141] libmachine: (functional-337253) Calling .GetSSHKeyPath
I0914 21:49:13.751637   21354 main.go:141] libmachine: (functional-337253) Calling .GetSSHUsername
I0914 21:49:13.751806   21354 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/functional-337253/id_rsa Username:docker}
I0914 21:49:13.849659   21354 ssh_runner.go:195] Run: sudo crictl images --output json
I0914 21:49:13.889849   21354 main.go:141] libmachine: Making call to close driver server
I0914 21:49:13.889866   21354 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:13.890133   21354 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:13.890154   21354 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 21:49:13.890170   21354 main.go:141] libmachine: Making call to close driver server
I0914 21:49:13.890178   21354 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
I0914 21:49:13.890183   21354 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:13.890416   21354 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:13.890435   21354 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 ssh pgrep buildkitd: exit status 1 (182.535817ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image build -t localhost/my-image:functional-337253 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 image build -t localhost/my-image:functional-337253 testdata/build --alsologtostderr: (6.243220813s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-337253 image build -t localhost/my-image:functional-337253 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a7c38d40ca7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-337253
--> 9d0f804128f
Successfully tagged localhost/my-image:functional-337253
9d0f804128f43e7e1fcfc0e246751985a8c12172f1d8a6ddd61de9b030c61d8f
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-337253 image build -t localhost/my-image:functional-337253 testdata/build --alsologtostderr:
I0914 21:49:14.124452   21478 out.go:296] Setting OutFile to fd 1 ...
I0914 21:49:14.124576   21478 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:14.124585   21478 out.go:309] Setting ErrFile to fd 2...
I0914 21:49:14.124590   21478 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 21:49:14.124741   21478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
I0914 21:49:14.125243   21478 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:14.125736   21478 config.go:182] Loaded profile config "functional-337253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 21:49:14.126116   21478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:14.126154   21478 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:14.141243   21478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
I0914 21:49:14.141669   21478 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:14.142187   21478 main.go:141] libmachine: Using API Version  1
I0914 21:49:14.142208   21478 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:14.142560   21478 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:14.142763   21478 main.go:141] libmachine: (functional-337253) Calling .GetState
I0914 21:49:14.144579   21478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 21:49:14.144629   21478 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 21:49:14.159235   21478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
I0914 21:49:14.159680   21478 main.go:141] libmachine: () Calling .GetVersion
I0914 21:49:14.160077   21478 main.go:141] libmachine: Using API Version  1
I0914 21:49:14.160098   21478 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 21:49:14.160379   21478 main.go:141] libmachine: () Calling .GetMachineName
I0914 21:49:14.160572   21478 main.go:141] libmachine: (functional-337253) Calling .DriverName
I0914 21:49:14.160753   21478 ssh_runner.go:195] Run: systemctl --version
I0914 21:49:14.160779   21478 main.go:141] libmachine: (functional-337253) Calling .GetSSHHostname
I0914 21:49:14.163289   21478 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:14.163800   21478 main.go:141] libmachine: (functional-337253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:23:fa", ip: ""} in network mk-functional-337253: {Iface:virbr1 ExpiryTime:2023-09-14 22:46:23 +0000 UTC Type:0 Mac:52:54:00:48:23:fa Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:functional-337253 Clientid:01:52:54:00:48:23:fa}
I0914 21:49:14.163843   21478 main.go:141] libmachine: (functional-337253) DBG | domain functional-337253 has defined IP address 192.168.50.73 and MAC address 52:54:00:48:23:fa in network mk-functional-337253
I0914 21:49:14.163919   21478 main.go:141] libmachine: (functional-337253) Calling .GetSSHPort
I0914 21:49:14.164090   21478 main.go:141] libmachine: (functional-337253) Calling .GetSSHKeyPath
I0914 21:49:14.164272   21478 main.go:141] libmachine: (functional-337253) Calling .GetSSHUsername
I0914 21:49:14.164421   21478 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/functional-337253/id_rsa Username:docker}
I0914 21:49:14.245865   21478 build_images.go:151] Building image from path: /tmp/build.3238183206.tar
I0914 21:49:14.245924   21478 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 21:49:14.254903   21478 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3238183206.tar
I0914 21:49:14.259346   21478 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3238183206.tar: stat -c "%s %y" /var/lib/minikube/build/build.3238183206.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3238183206.tar': No such file or directory
I0914 21:49:14.259374   21478 ssh_runner.go:362] scp /tmp/build.3238183206.tar --> /var/lib/minikube/build/build.3238183206.tar (3072 bytes)
I0914 21:49:14.290144   21478 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3238183206
I0914 21:49:14.302914   21478 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3238183206 -xf /var/lib/minikube/build/build.3238183206.tar
I0914 21:49:14.319053   21478 crio.go:297] Building image: /var/lib/minikube/build/build.3238183206
I0914 21:49:14.319107   21478 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-337253 /var/lib/minikube/build/build.3238183206 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0914 21:49:20.295854   21478 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-337253 /var/lib/minikube/build/build.3238183206 --cgroup-manager=cgroupfs: (5.976718937s)
I0914 21:49:20.295922   21478 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3238183206
I0914 21:49:20.306783   21478 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3238183206.tar
I0914 21:49:20.317374   21478 build_images.go:207] Built localhost/my-image:functional-337253 from /tmp/build.3238183206.tar
I0914 21:49:20.317402   21478 build_images.go:123] succeeded building to: functional-337253
I0914 21:49:20.317409   21478 build_images.go:124] failed building to: 
I0914 21:49:20.317436   21478 main.go:141] libmachine: Making call to close driver server
I0914 21:49:20.317451   21478 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:20.317691   21478 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:20.317710   21478 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 21:49:20.317716   21478 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
I0914 21:49:20.317732   21478 main.go:141] libmachine: Making call to close driver server
I0914 21:49:20.317742   21478 main.go:141] libmachine: (functional-337253) Calling .Close
I0914 21:49:20.318042   21478 main.go:141] libmachine: Successfully made call to close driver server
I0914 21:49:20.318058   21478 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 21:49:20.318089   21478 main.go:141] libmachine: (functional-337253) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls
2023/09/14 21:49:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.63s)
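Note: the STEP 1/3 through 3/3 lines above imply a three-line Dockerfile in testdata/build. The sketch below is a hypothetical reconstruction (the actual file contents are not included in this log), followed by the same build command the test runs:

# hypothetical reconstruction of testdata/build, inferred from the STEP lines above
mkdir -p testdata/build
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > testdata/build/Dockerfile
echo test > testdata/build/content.txt
# same invocation as functional_test.go:314; per the stderr trace, the build runs via podman inside the node
out/minikube-linux-amd64 -p functional-337253 image build -t localhost/my-image:functional-337253 testdata/build --alsologtostderr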

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.008245623s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-337253
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.03s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
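Note: all three UpdateContextCmd subtests run the same command, which refreshes the kubeconfig entry for the profile so kubectl keeps pointing at the cluster after an IP or port change. Manual form (the kubectl line is an illustrative follow-up, not part of the test):

out/minikube-linux-amd64 -p functional-337253 update-context --alsologtostderr -v=2
kubectl config current-context   # hypothetical follow-up check: should report functional-337253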

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image load --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 image load --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr: (4.226110312s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image load --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 image load --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr: (2.405205031s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (13.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.939491469s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-337253
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image load --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 image load --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr: (11.246336701s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (13.43s)
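Note: Setup, ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon together exercise a pull, tag, and load workflow: pull the image into the host docker daemon, tag it with the profile name, then push it into the cluster's runtime. Condensed sequence, using only commands shown above:

docker pull gcr.io/google-containers/addon-resizer:1.8.9
docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-337253
out/minikube-linux-amd64 -p functional-337253 image load --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr
out/minikube-linux-amd64 -p functional-337253 image ls   # the loaded tag should now be listed by CRI-O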

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 service list -o json
functional_test.go:1493: Took "305.763197ms" to run "out/minikube-linux-amd64 -p functional-337253 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.73:30539
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.73:30539
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
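Note: the ServiceCmd subtests above resolve the hello-node endpoint in several output shapes; in this run they all point at 192.168.50.73:30539. The variants, taken verbatim from the test commands:

out/minikube-linux-amd64 -p functional-337253 service list -o json
out/minikube-linux-amd64 -p functional-337253 service --namespace=default --https --url hello-node   # https://192.168.50.73:30539
out/minikube-linux-amd64 -p functional-337253 service hello-node --url --format={{.IP}}              # IP only
out/minikube-linux-amd64 -p functional-337253 service hello-node --url                               # http://192.168.50.73:30539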

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "283.981023ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "40.480815ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "221.772089ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "39.014813ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
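Note: the ProfileCmd subtests time four listing variants. The plain listings take roughly 220-285ms in this run, while the -l / --light variants return in about 40ms, presumably because light mode skips probing cluster status:

out/minikube-linux-amd64 profile list
out/minikube-linux-amd64 profile list -l             # light listing
out/minikube-linux-amd64 profile list -o json
out/minikube-linux-amd64 profile list -o json --light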

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (23.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdany-port3543658839/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694728127144356797" to /tmp/TestFunctionalparallelMountCmdany-port3543658839/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694728127144356797" to /tmp/TestFunctionalparallelMountCmdany-port3543658839/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694728127144356797" to /tmp/TestFunctionalparallelMountCmdany-port3543658839/001/test-1694728127144356797
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.241451ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 21:48 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 21:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 21:48 test-1694728127144356797
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh cat /mount-9p/test-1694728127144356797
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-337253 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [20b9fcbe-e0d8-4b3f-a392-c85252d6f587] Pending
helpers_test.go:344: "busybox-mount" [20b9fcbe-e0d8-4b3f-a392-c85252d6f587] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [20b9fcbe-e0d8-4b3f-a392-c85252d6f587] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [20b9fcbe-e0d8-4b3f-a392-c85252d6f587] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 21.064631573s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-337253 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdany-port3543658839/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.50s)
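Note: the any-port test starts "minikube mount" as a background daemon (a 9p share), waits for the mount to appear in the guest, then runs a busybox pod that reads and writes files through it. A manual equivalent of the host-side checks, with the paths taken from this run:

out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdany-port3543658839/001:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T /mount-9p | grep 9p"   # may fail once while the 9p server starts, as it does above
out/minikube-linux-amd64 -p functional-337253 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-337253 ssh "sudo umount -f /mount-9p"         # teardown, same as functional_test_mount_test.go:90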

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image save gcr.io/google-containers/addon-resizer:functional-337253 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 image save gcr.io/google-containers/addon-resizer:functional-337253 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.232709696s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image rm gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.926122131s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-337253
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 image save --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-337253 image save --daemon gcr.io/google-containers/addon-resizer:functional-337253 --alsologtostderr: (1.181391265s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-337253
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)
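Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon form a save/remove/reload round trip between the cluster runtime, a tarball on the host, and the host docker daemon. Condensed sketch (the tarball path is shortened here; the test uses the Jenkins workspace path shown above):

out/minikube-linux-amd64 -p functional-337253 image save gcr.io/google-containers/addon-resizer:functional-337253 ./addon-resizer-save.tar
out/minikube-linux-amd64 -p functional-337253 image rm gcr.io/google-containers/addon-resizer:functional-337253
out/minikube-linux-amd64 -p functional-337253 image load ./addon-resizer-save.tar
out/minikube-linux-amd64 -p functional-337253 image save --daemon gcr.io/google-containers/addon-resizer:functional-337253
docker image inspect gcr.io/google-containers/addon-resizer:functional-337253   # confirms the image is back in the host daemon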

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdspecific-port432785940/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.464453ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdspecific-port432785940/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 ssh "sudo umount -f /mount-9p": exit status 1 (235.683957ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-337253 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdspecific-port432785940/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3207744311/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3207744311/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3207744311/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T" /mount1: exit status 1 (257.484839ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-337253 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-337253 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3207744311/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3207744311/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-337253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3207744311/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
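Note: VerifyCleanup mounts the same host directory at three guest paths, then uses the --kill flag to tear all of the background mount processes down at once rather than stopping each one individually:

out/minikube-linux-amd64 mount -p functional-337253 --kill=true   # kills every background mount process for the profile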

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-337253
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-337253
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-337253
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (112.99s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-235631 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0914 21:49:29.764419   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:29.770064   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:29.780270   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:29.800505   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:29.840788   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:29.921163   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:30.081551   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:30.402108   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:31.043067   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:32.323998   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:34.884672   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:40.005202   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:49:50.246112   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:50:10.726450   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 21:50:51.686911   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-235631 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m52.993302304s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (112.99s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-235631 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-235631 addons enable ingress --alsologtostderr -v=5: (14.416925812s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.42s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-235631 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                    
TestJSONOutput/start/Command (59.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-817931 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0914 21:54:54.115042   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:54:57.448844   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-817931 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (59.393922231s)
--- PASS: TestJSONOutput/start/Command (59.39s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-817931 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-817931 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.09s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-817931 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-817931 --output=json --user=testUser: (7.093289037s)
--- PASS: TestJSONOutput/stop/Command (7.09s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-637044 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-637044 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.287867ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c0ed3286-34a4-4e6a-9f23-3d37d4ec275e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-637044] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbce69a0-9ce1-4d5c-92ef-1f820f183211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17243"}}
	{"specversion":"1.0","id":"2af446db-066e-45ae-be71-9a29bcaad406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f657a6df-080f-4df3-b0b6-ac2e04b38646","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig"}}
	{"specversion":"1.0","id":"f18d222c-0a6f-4486-b5a8-7a2bb298865b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube"}}
	{"specversion":"1.0","id":"c18cb7c7-0062-4c84-a87a-16fbc2ec089e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aa8b4daf-84de-418d-84c1-0761c31f4082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"83c58d4f-a58c-4c12-bce8-f054e63ca2ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-637044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-637044
--- PASS: TestErrorJSONOutput (0.18s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (94.98s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-396784 --driver=kvm2  --container-runtime=crio
E0914 21:56:16.036216   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-396784 --driver=kvm2  --container-runtime=crio: (44.719099309s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-399360 --driver=kvm2  --container-runtime=crio
E0914 21:56:36.475709   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:36.480973   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:36.491303   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:36.511603   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:36.551898   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:36.632296   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:36.792705   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:37.113275   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:37.753690   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:39.034270   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:41.596199   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:46.716823   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:56:56.957066   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-399360 --driver=kvm2  --container-runtime=crio: (47.58528069s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-396784
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-399360
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-399360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-399360
helpers_test.go:175: Cleaning up "first-396784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-396784
--- PASS: TestMinikubeProfile (94.98s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-906327 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0914 21:57:17.437630   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-906327 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.269685354s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.27s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-906327 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-906327 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-919267 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0914 21:57:58.398338   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-919267 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.497473766s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.50s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-919267 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-919267 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.85s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-906327 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.85s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-919267 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-919267 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.09s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-919267
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-919267: (1.089470756s)
--- PASS: TestMountStart/serial/Stop (1.09s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-919267
E0914 21:58:32.188673   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-919267: (21.14153814s)
--- PASS: TestMountStart/serial/RestartStopped (22.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-919267 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-919267 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (136.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124911 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 21:58:59.876640   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 21:59:20.319319   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 21:59:29.764465   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-124911 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m15.984776308s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.37s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-124911 -- rollout status deployment/busybox: (4.461682406s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-lv55w -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-pmkvp -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-lv55w -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-pmkvp -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-lv55w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-124911 -- exec busybox-5bc68d56bd-pmkvp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.06s)

                                                
                                    
TestMultiNode/serial/AddNode (43.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-124911 -v 3 --alsologtostderr
E0914 22:01:36.475458   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-124911 -v 3 --alsologtostderr: (43.071218412s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.60s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.19s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp testdata/cp-test.txt multinode-124911:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1415921513/001/cp-test_multinode-124911.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911:/home/docker/cp-test.txt multinode-124911-m02:/home/docker/cp-test_multinode-124911_multinode-124911-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m02 "sudo cat /home/docker/cp-test_multinode-124911_multinode-124911-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911:/home/docker/cp-test.txt multinode-124911-m03:/home/docker/cp-test_multinode-124911_multinode-124911-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m03 "sudo cat /home/docker/cp-test_multinode-124911_multinode-124911-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp testdata/cp-test.txt multinode-124911-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1415921513/001/cp-test_multinode-124911-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911-m02:/home/docker/cp-test.txt multinode-124911:/home/docker/cp-test_multinode-124911-m02_multinode-124911.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911 "sudo cat /home/docker/cp-test_multinode-124911-m02_multinode-124911.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911-m02:/home/docker/cp-test.txt multinode-124911-m03:/home/docker/cp-test_multinode-124911-m02_multinode-124911-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m03 "sudo cat /home/docker/cp-test_multinode-124911-m02_multinode-124911-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp testdata/cp-test.txt multinode-124911-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1415921513/001/cp-test_multinode-124911-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911-m03:/home/docker/cp-test.txt multinode-124911:/home/docker/cp-test_multinode-124911-m03_multinode-124911.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911 "sudo cat /home/docker/cp-test_multinode-124911-m03_multinode-124911.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 cp multinode-124911-m03:/home/docker/cp-test.txt multinode-124911-m02:/home/docker/cp-test_multinode-124911-m03_multinode-124911-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 ssh -n multinode-124911-m02 "sudo cat /home/docker/cp-test_multinode-124911-m03_multinode-124911-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.80s)

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-124911 node stop m03: (1.403982191s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-124911 status: exit status 7 (397.37459ms)

                                                
                                                
-- stdout --
	multinode-124911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-124911 status --alsologtostderr: exit status 7 (404.171457ms)

                                                
                                                
-- stdout --
	multinode-124911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:01:55.135209   28457 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:01:55.135439   28457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:01:55.135448   28457 out.go:309] Setting ErrFile to fd 2...
	I0914 22:01:55.135453   28457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:01:55.135693   28457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:01:55.135898   28457 out.go:303] Setting JSON to false
	I0914 22:01:55.135933   28457 mustload.go:65] Loading cluster: multinode-124911
	I0914 22:01:55.136030   28457 notify.go:220] Checking for updates...
	I0914 22:01:55.136368   28457 config.go:182] Loaded profile config "multinode-124911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:01:55.136382   28457 status.go:255] checking status of multinode-124911 ...
	I0914 22:01:55.136762   28457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:01:55.136820   28457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:01:55.151723   28457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0914 22:01:55.152158   28457 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:01:55.152790   28457 main.go:141] libmachine: Using API Version  1
	I0914 22:01:55.152829   28457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:01:55.153134   28457 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:01:55.153321   28457 main.go:141] libmachine: (multinode-124911) Calling .GetState
	I0914 22:01:55.155215   28457 status.go:330] multinode-124911 host status = "Running" (err=<nil>)
	I0914 22:01:55.155235   28457 host.go:66] Checking if "multinode-124911" exists ...
	I0914 22:01:55.155738   28457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:01:55.155821   28457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:01:55.171212   28457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I0914 22:01:55.171613   28457 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:01:55.171980   28457 main.go:141] libmachine: Using API Version  1
	I0914 22:01:55.172002   28457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:01:55.172274   28457 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:01:55.172420   28457 main.go:141] libmachine: (multinode-124911) Calling .GetIP
	I0914 22:01:55.174813   28457 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:01:55.175196   28457 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:01:55.175228   28457 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:01:55.175338   28457 host.go:66] Checking if "multinode-124911" exists ...
	I0914 22:01:55.175742   28457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:01:55.175811   28457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:01:55.189502   28457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36371
	I0914 22:01:55.189854   28457 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:01:55.190268   28457 main.go:141] libmachine: Using API Version  1
	I0914 22:01:55.190291   28457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:01:55.190604   28457 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:01:55.190777   28457 main.go:141] libmachine: (multinode-124911) Calling .DriverName
	I0914 22:01:55.190933   28457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:01:55.190961   28457 main.go:141] libmachine: (multinode-124911) Calling .GetSSHHostname
	I0914 22:01:55.193582   28457 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:01:55.193989   28457 main.go:141] libmachine: (multinode-124911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:3f:c1", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 22:58:52 +0000 UTC Type:0 Mac:52:54:00:97:3f:c1 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-124911 Clientid:01:52:54:00:97:3f:c1}
	I0914 22:01:55.194021   28457 main.go:141] libmachine: (multinode-124911) DBG | domain multinode-124911 has defined IP address 192.168.39.116 and MAC address 52:54:00:97:3f:c1 in network mk-multinode-124911
	I0914 22:01:55.194158   28457 main.go:141] libmachine: (multinode-124911) Calling .GetSSHPort
	I0914 22:01:55.194300   28457 main.go:141] libmachine: (multinode-124911) Calling .GetSSHKeyPath
	I0914 22:01:55.194437   28457 main.go:141] libmachine: (multinode-124911) Calling .GetSSHUsername
	I0914 22:01:55.194542   28457 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911/id_rsa Username:docker}
	I0914 22:01:55.278149   28457 ssh_runner.go:195] Run: systemctl --version
	I0914 22:01:55.283835   28457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:01:55.296903   28457 kubeconfig.go:92] found "multinode-124911" server: "https://192.168.39.116:8443"
	I0914 22:01:55.296927   28457 api_server.go:166] Checking apiserver status ...
	I0914 22:01:55.296956   28457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:01:55.309961   28457 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1097/cgroup
	I0914 22:01:55.322713   28457 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/pod45ad3e9dc71d2c9a455002dbdc235854/crio-3ac5473f8a18b59469985f1b0d2124312046a0f90c42af2acf8373f566ae4a56"
	I0914 22:01:55.322781   28457 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod45ad3e9dc71d2c9a455002dbdc235854/crio-3ac5473f8a18b59469985f1b0d2124312046a0f90c42af2acf8373f566ae4a56/freezer.state
	I0914 22:01:55.331518   28457 api_server.go:204] freezer state: "THAWED"
	I0914 22:01:55.331547   28457 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0914 22:01:55.336205   28457 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0914 22:01:55.336224   28457 status.go:421] multinode-124911 apiserver status = Running (err=<nil>)
	I0914 22:01:55.336231   28457 status.go:257] multinode-124911 status: &{Name:multinode-124911 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 22:01:55.336244   28457 status.go:255] checking status of multinode-124911-m02 ...
	I0914 22:01:55.336506   28457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:01:55.336530   28457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:01:55.350944   28457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0914 22:01:55.351312   28457 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:01:55.351692   28457 main.go:141] libmachine: Using API Version  1
	I0914 22:01:55.351719   28457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:01:55.352012   28457 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:01:55.352190   28457 main.go:141] libmachine: (multinode-124911-m02) Calling .GetState
	I0914 22:01:55.353689   28457 status.go:330] multinode-124911-m02 host status = "Running" (err=<nil>)
	I0914 22:01:55.353702   28457 host.go:66] Checking if "multinode-124911-m02" exists ...
	I0914 22:01:55.354054   28457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:01:55.354080   28457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:01:55.367906   28457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0914 22:01:55.368237   28457 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:01:55.368660   28457 main.go:141] libmachine: Using API Version  1
	I0914 22:01:55.368683   28457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:01:55.368984   28457 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:01:55.369147   28457 main.go:141] libmachine: (multinode-124911-m02) Calling .GetIP
	I0914 22:01:55.371664   28457 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:01:55.372088   28457 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:01:55.372115   28457 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:01:55.372224   28457 host.go:66] Checking if "multinode-124911-m02" exists ...
	I0914 22:01:55.372477   28457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:01:55.372512   28457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:01:55.386375   28457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0914 22:01:55.386709   28457 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:01:55.387155   28457 main.go:141] libmachine: Using API Version  1
	I0914 22:01:55.387174   28457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:01:55.387454   28457 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:01:55.387636   28457 main.go:141] libmachine: (multinode-124911-m02) Calling .DriverName
	I0914 22:01:55.387791   28457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:01:55.387808   28457 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHHostname
	I0914 22:01:55.390317   28457 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:01:55.390733   28457 main.go:141] libmachine: (multinode-124911-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:38:83", ip: ""} in network mk-multinode-124911: {Iface:virbr1 ExpiryTime:2023-09-14 23:00:00 +0000 UTC Type:0 Mac:52:54:00:55:38:83 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-124911-m02 Clientid:01:52:54:00:55:38:83}
	I0914 22:01:55.390772   28457 main.go:141] libmachine: (multinode-124911-m02) DBG | domain multinode-124911-m02 has defined IP address 192.168.39.254 and MAC address 52:54:00:55:38:83 in network mk-multinode-124911
	I0914 22:01:55.390951   28457 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHPort
	I0914 22:01:55.391143   28457 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHKeyPath
	I0914 22:01:55.391297   28457 main.go:141] libmachine: (multinode-124911-m02) Calling .GetSSHUsername
	I0914 22:01:55.391431   28457 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17243-6287/.minikube/machines/multinode-124911-m02/id_rsa Username:docker}
	I0914 22:01:55.473753   28457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:01:55.485093   28457 status.go:257] multinode-124911-m02 status: &{Name:multinode-124911-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 22:01:55.485116   28457 status.go:255] checking status of multinode-124911-m03 ...
	I0914 22:01:55.485386   28457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 22:01:55.485410   28457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 22:01:55.499895   28457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0914 22:01:55.500271   28457 main.go:141] libmachine: () Calling .GetVersion
	I0914 22:01:55.500712   28457 main.go:141] libmachine: Using API Version  1
	I0914 22:01:55.500731   28457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 22:01:55.501008   28457 main.go:141] libmachine: () Calling .GetMachineName
	I0914 22:01:55.501170   28457 main.go:141] libmachine: (multinode-124911-m03) Calling .GetState
	I0914 22:01:55.502554   28457 status.go:330] multinode-124911-m03 host status = "Stopped" (err=<nil>)
	I0914 22:01:55.502566   28457 status.go:343] host is not running, skipping remaining checks
	I0914 22:01:55.502571   28457 status.go:257] multinode-124911-m03 status: &{Name:multinode-124911-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 node start m03 --alsologtostderr
E0914 22:02:04.159604   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-124911 node start m03 --alsologtostderr: (28.895940792s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.49s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-124911 node delete m03: (1.193468236s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.71s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (444.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124911 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 22:16:36.476178   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 22:18:32.189726   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 22:19:29.764719   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:21:36.475248   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 22:22:32.810204   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:23:32.189353   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-124911 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m23.480839379s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-124911 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (444.02s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-124911
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124911-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-124911-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (64.432954ms)

                                                
                                                
-- stdout --
	* [multinode-124911-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-124911-m02' is duplicated with machine name 'multinode-124911-m02' in profile 'multinode-124911'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-124911-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-124911-m03 --driver=kvm2  --container-runtime=crio: (43.211079542s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-124911
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-124911: exit status 80 (203.084585ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-124911
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-124911-m03 already exists in multinode-124911-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-124911-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.49s)

                                                
                                    
x
+
TestScheduledStopUnix (115.7s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-997589 --memory=2048 --driver=kvm2  --container-runtime=crio
E0914 22:29:29.764442   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:29:39.521158   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-997589 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.191483073s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-997589 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-997589 -n scheduled-stop-997589
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-997589 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-997589 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-997589 -n scheduled-stop-997589
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-997589
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-997589 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-997589
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-997589: exit status 7 (59.287257ms)

                                                
                                                
-- stdout --
	scheduled-stop-997589
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-997589 -n scheduled-stop-997589
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-997589 -n scheduled-stop-997589: exit status 7 (60.155722ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-997589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-997589
--- PASS: TestScheduledStopUnix (115.70s)
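The scheduled-stop flow exercised above can be reproduced by hand with the same flags that appear in the log; a minimal sketch (the profile name is illustrative):

    minikube start -p scheduled-stop-demo --memory=2048 --driver=kvm2 --container-runtime=crio
    minikube stop -p scheduled-stop-demo --schedule 5m                  # arm a stop five minutes out
    minikube status -p scheduled-stop-demo --format='{{.TimeToStop}}'   # inspect the pending timer
    minikube stop -p scheduled-stop-demo --cancel-scheduled             # cancel before it fires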

                                                
                                    
x
+
TestKubernetesUpgrade (199.27s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-711912 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-711912 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m17.91174573s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-711912
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-711912: (5.201217703s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-711912 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-711912 status --format={{.Host}}: exit status 7 (62.231086ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-711912 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-711912 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m25.755705212s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-711912 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-711912 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-711912 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.396121ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-711912] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-711912
	    minikube start -p kubernetes-upgrade-711912 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7119122 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-711912 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-711912 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0914 22:36:36.475257   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-711912 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (28.774758118s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-711912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-711912
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-711912: (1.41457223s)
--- PASS: TestKubernetesUpgrade (199.27s)
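The upgrade path this test validates, and the downgrade it expects to be refused, reduce to the following sequence; a sketch assembled from the commands in the log, with the profile name shortened for illustration:

    minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p k8s-upgrade
    minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.28.1 --driver=kvm2 --container-runtime=crio   # upgrade is allowed
    minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio   # refused with K8S_DOWNGRADE_UNSUPPORTED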

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982498 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-982498 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (64.776233ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-982498] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
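As the MK_USAGE error above shows, --no-kubernetes and --kubernetes-version are mutually exclusive. Based on the invocations used by the later NoKubernetes subtests and the suggestion in the stderr output, the accepted forms are roughly:

    minikube start -p NoKubernetes-982498 --no-kubernetes --driver=kvm2 --container-runtime=crio   # start the VM without Kubernetes
    minikube config unset kubernetes-version                                                       # clear a globally configured version first, if one is set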

                                                
                                    
x
+
TestPause/serial/Start (110.23s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-354420 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-354420 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m50.227248435s)
--- PASS: TestPause/serial/Start (110.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (105.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982498 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982498 --driver=kvm2  --container-runtime=crio: (1m45.525924709s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-982498 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (105.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (5.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982498 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982498 --no-kubernetes --driver=kvm2  --container-runtime=crio: (4.493015815s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-982498 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-982498 status -o json: exit status 2 (223.228811ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-982498","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-982498
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-982498: (1.04714979s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (31.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982498 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0914 22:33:32.188371   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982498 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.152147701s)
--- PASS: TestNoKubernetes/serial/Start (31.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-982498 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-982498 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.89664ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
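Exit status 1 is the expected outcome here: systemctl is-active exits non-zero for a unit that is not running (the status 3 reported over ssh is the conventional "inactive" code), so the assertion passes precisely because kubelet is stopped. A rough manual equivalent, assuming ssh access to the same profile:

    minikube ssh -p NoKubernetes-982498 "sudo systemctl is-active kubelet"   # prints "inactive" and returns a non-zero exit code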

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.04566126s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-982498
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-982498: (1.208660612s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (42.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982498 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982498 --driver=kvm2  --container-runtime=crio: (42.016731691s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-104104 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-104104 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (93.623445ms)

                                                
                                                
-- stdout --
	* [false-104104] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:34:19.584177   39492 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:34:19.584390   39492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:34:19.584398   39492 out.go:309] Setting ErrFile to fd 2...
	I0914 22:34:19.584402   39492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:34:19.584580   39492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-6287/.minikube/bin
	I0914 22:34:19.585095   39492 out.go:303] Setting JSON to false
	I0914 22:34:19.585944   39492 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4602,"bootTime":1694726258,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 22:34:19.586002   39492 start.go:138] virtualization: kvm guest
	I0914 22:34:19.588275   39492 out.go:177] * [false-104104] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 22:34:19.589686   39492 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:34:19.589682   39492 notify.go:220] Checking for updates...
	I0914 22:34:19.591113   39492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:34:19.592552   39492 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-6287/kubeconfig
	I0914 22:34:19.593868   39492 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-6287/.minikube
	I0914 22:34:19.595291   39492 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 22:34:19.596648   39492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:34:19.598400   39492 config.go:182] Loaded profile config "NoKubernetes-982498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0914 22:34:19.598492   39492 config.go:182] Loaded profile config "force-systemd-env-248976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:34:19.598560   39492 config.go:182] Loaded profile config "kubernetes-upgrade-711912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0914 22:34:19.598653   39492 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:34:19.632859   39492 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 22:34:19.634143   39492 start.go:298] selected driver: kvm2
	I0914 22:34:19.634155   39492 start.go:902] validating driver "kvm2" against <nil>
	I0914 22:34:19.634164   39492 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:34:19.635964   39492 out.go:177] 
	W0914 22:34:19.637574   39492 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0914 22:34:19.638941   39492 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-104104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-104104" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-104104

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-104104"

                                                
                                                
----------------------- debugLogs end: false-104104 [took: 2.859835419s] --------------------------------
helpers_test.go:175: Cleaning up "false-104104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-104104
--- PASS: TestNetworkPlugins/group/false (3.10s)
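This group passes because the start is expected to be rejected: with the crio runtime a CNI is mandatory, so --cni=false fails with MK_USAGE before any VM is created, which is also why every debugLogs probe afterwards reports a missing profile or context. If an explicit CNI is actually wanted together with crio, an invocation along these lines should be accepted (the bridge value is an assumption; any CNI that minikube supports would do):

    minikube start -p false-demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio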

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-982498 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-982498 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.505698ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (341.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-930717 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-930717 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (5m41.395392556s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (341.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (104.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-344363 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-344363 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m44.241202513s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-344363 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [608ce466-af8d-4d2f-b38f-dabc477f308b] Pending
helpers_test.go:344: "busybox" [608ce466-af8d-4d2f-b38f-dabc477f308b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [608ce466-af8d-4d2f-b38f-dabc477f308b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.02053813s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-344363 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-344363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-344363 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.01570362s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-344363 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-799144 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0914 22:39:12.810998   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 22:39:29.764338   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-799144 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (59.454090375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-799144 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [012aa3b5-77e6-4f18-a715-0b2b77e4caf8] Pending
helpers_test.go:344: "busybox" [012aa3b5-77e6-4f18-a715-0b2b77e4caf8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [012aa3b5-77e6-4f18-a715-0b2b77e4caf8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.027793844s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-799144 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-799144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-799144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025665728s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-799144 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-948459
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (60.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-588699 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-588699 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m0.307003023s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (662.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-344363 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-344363 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (11m2.498641064s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-344363 -n no-preload-344363
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (662.77s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (12.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-588699 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e733715-08db-4788-932b-728540c3f2eb] Pending
helpers_test.go:344: "busybox" [0e733715-08db-4788-932b-728540c3f2eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 22:41:36.475121   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
helpers_test.go:344: "busybox" [0e733715-08db-4788-932b-728540c3f2eb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.029730514s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-588699 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-588699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-588699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002310754s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-588699 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-930717 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [77b1c6dd-dc39-47b9-9583-3a038aa7560a] Pending
helpers_test.go:344: "busybox" [77b1c6dd-dc39-47b9-9583-3a038aa7560a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [77b1c6dd-dc39-47b9-9583-3a038aa7560a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.033251507s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-930717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-930717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-930717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (508.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-799144 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-799144 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (8m28.105994095s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799144 -n default-k8s-diff-port-799144
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (508.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (482.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-588699 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0914 22:44:29.764428   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-588699 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (8m2.419255729s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-588699 -n embed-certs-588699
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (482.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (536.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-930717 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0914 22:46:19.522277   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 22:46:36.475188   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
E0914 22:48:32.188303   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 22:49:29.764374   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-930717 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (8m56.004139566s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930717 -n old-k8s-version-930717
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (536.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (60.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-395546 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-395546 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m0.463175353s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (86.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0914 23:07:37.126454   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:37.131689   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:37.141952   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:37.162251   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:37.202578   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:37.282902   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:37.443296   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:37.764431   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:38.405391   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:39.686101   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:42.246309   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:47.367168   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:07:57.607932   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m26.699522015s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-395546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-395546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.641572163s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-395546 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-395546 --alsologtostderr -v=3: (3.095627145s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-395546 -n newest-cni-395546
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-395546 -n newest-cni-395546: exit status 7 (64.433662ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-395546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
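
Note on the exit code above: `minikube status --format={{.Host}}` exits with code 7 once the host is stopped, and the test accepts that ("may be ok") before enabling the dashboard addon offline. Below is a minimal Go sketch of the same sequence, assuming the minikube binary path and the newest-cni-395546 profile name seen in this log; it illustrates the logged steps and is not the test's actual helper code.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and returns its exit code plus combined output,
// treating a non-zero exit as data rather than a fatal error.
func run(name string, args ...string) (int, string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode(), string(out)
	}
	if err != nil {
		log.Fatalf("could not run %s: %v", name, err)
	}
	return 0, string(out)
}

func main() {
	profile := "newest-cni-395546" // profile name taken from the log above

	// After "minikube stop" the host is down, so a non-zero status is expected;
	// exit code 7 is what the log above reports for a stopped host.
	code, out := run("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", profile)
	fmt.Printf("status exit=%d output=%q\n", code, out)

	// Addons can still be enabled while the cluster is stopped; the change
	// takes effect on the next start.
	code, out = run("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if code != 0 {
		log.Fatalf("addons enable failed (exit %d): %s", code, out)
	}
	fmt.Println("dashboard addon enabled while stopped")
}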

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (51.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-395546 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0914 23:08:18.088338   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-395546 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (50.983264691s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-395546 -n newest-cni-395546
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-104104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (15.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-104104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gzbwg" [164d5d84-7408-43a1-b32c-7287b86d39d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:08:32.188995   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gzbwg" [164d5d84-7408-43a1-b32c-7287b86d39d2] Running
E0914 23:08:41.521831   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:41.527130   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:41.537446   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:41.557717   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:41.598018   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:41.678162   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:41.838477   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:42.159654   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:42.800509   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:44.081013   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.013149067s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.47s)
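
Note: the NetCatPod step above applies testdata/netcat-deployment.yaml and then polls for pods labelled app=netcat in the default namespace until they are Running (the test allows up to 15 minutes). A roughly equivalent wait can be expressed with kubectl wait; the sketch below assumes kubectl is on PATH and uses the auto-104104 context from this log, and only approximates the test's own polling helper in helpers_test.go.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx := "auto-104104" // kube context name taken from the log above
	start := time.Now()

	// kubectl wait blocks until every pod matching the selector reports the
	// Ready condition, or the timeout expires (15m, as in the test above).
	cmd := exec.Command("kubectl", "--context", ctx,
		"wait", "--for=condition=Ready", "pod",
		"-l", "app=netcat", "-n", "default", "--timeout=15m")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("app=netcat pods never became ready: %v\n%s", err, out)
	}
	log.Printf("app=netcat ready after %s", time.Since(start).Round(time.Second))
}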

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (21.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-104104 exec deployment/netcat -- nslookup kubernetes.default
E0914 23:08:46.642042   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:08:51.763045   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-104104 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.18463377s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-104104 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-104104 exec deployment/netcat -- nslookup kubernetes.default: (5.198569107s)
--- PASS: TestNetworkPlugins/group/auto/DNS (21.71s)
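
Note: the DNS check above fails on its first nslookup ("connection timed out; no servers could be reached", exit 1) and passes on the retry a few seconds later, which is typical while CoreDNS is still warming up. Below is a minimal retry loop around the same command, assuming kubectl on PATH and the auto-104104 context from this log; the attempt count and delay are illustrative, not the test's actual values.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx := "auto-104104" // context name taken from the log above
	args := []string{"--context", ctx, "exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default"}

	// Retry a handful of times: the first lookup often times out while
	// CoreDNS is still coming up, exactly as seen in the log above.
	var lastOut []byte
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		lastOut, lastErr = exec.Command("kubectl", args...).CombinedOutput()
		if lastErr == nil {
			log.Printf("attempt %d succeeded:\n%s", attempt, lastOut)
			return
		}
		log.Printf("attempt %d failed: %v", attempt, lastErr)
		time.Sleep(5 * time.Second)
	}
	log.Fatalf("DNS never resolved: %v\n%s", lastErr, lastOut)
}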

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-395546 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-395546 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-395546 -n newest-cni-395546
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-395546 -n newest-cni-395546: exit status 2 (245.855696ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-395546 -n newest-cni-395546
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-395546 -n newest-cni-395546: exit status 2 (240.27112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-395546 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-395546 -n newest-cni-395546
E0914 23:08:59.049492   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-395546 -n newest-cni-395546
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.35s)
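
Note: in the Pause sequence above, `minikube status` exits with code 2 while components are paused (the APIServer field reads Paused and Kubelet reads Stopped), and the test treats that as the expected intermediate state before unpausing. A small Go sketch of the same pause/verify/unpause cycle follows, assuming the binary path and the newest-cni-395546 profile from this log; that APIServer reads Running again after unpause is an assumption, since the log only shows the post-unpause status commands completing.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// status returns one field of `minikube status` plus its exit code; per the
// log above, exit code 2 just means a component is paused or stopped.
func status(profile, field string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile).CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	} else if err != nil {
		log.Fatalf("status failed: %v", err)
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "newest-cni-395546" // profile name taken from the log above

	if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run(); err != nil {
		log.Fatalf("pause failed: %v", err)
	}
	if s, code := status(profile, "APIServer"); s != "Paused" || code != 2 {
		log.Fatalf("expected Paused with exit 2, got %q (exit %d)", s, code)
	}

	if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run(); err != nil {
		log.Fatalf("unpause failed: %v", err)
	}
	s, code := status(profile, "APIServer")
	log.Printf("after unpause: APIServer=%q (exit %d)", s, code) // assumed to read Running
}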

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0914 23:09:02.004034   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m11.938409583s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (115.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m55.553107341s)
--- PASS: TestNetworkPlugins/group/calico/Start (115.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
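
Note: the Localhost and HairPin checks above run the same `nc -w 5 -i 5 -z ... 8080` probe from inside the netcat deployment, first against localhost and then against the pod's own Service name (netcat); the second only succeeds when hairpin traffic (a pod reaching itself through its Service) is handled by the CNI and kube-proxy configuration. A minimal sketch reproducing both probes, assuming kubectl on PATH and the auto-104104 context from this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// probe runs `nc -z` inside the netcat deployment against the given host,
// mirroring the commands shown in the log above.
func probe(ctx, host string) error {
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z "+host+" 8080").CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", host, err, out)
	}
	return nil
}

func main() {
	ctx := "auto-104104" // context name taken from the log above

	// localhost: the pod talking to its own container port directly.
	if err := probe(ctx, "localhost"); err != nil {
		log.Fatalf("localhost probe failed: %v", err)
	}
	// "netcat": the pod reaching itself via its Service, i.e. hairpin NAT.
	if err := probe(ctx, "netcat"); err != nil {
		log.Fatalf("hairpin probe failed: %v", err)
	}
	log.Println("localhost and hairpin probes both succeeded")
}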

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (123.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0914 23:09:22.484760   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:09:29.765473   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
E0914 23:10:01.993621   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:01.998977   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:02.009240   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:02.029510   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:02.069942   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:02.150281   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:02.310770   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:02.631959   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:03.272714   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:03.444980   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
E0914 23:10:04.553610   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:07.113976   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
E0914 23:10:12.234803   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m3.099671169s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (123.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-49m7n" [043dc0a8-431b-4569-9fea-a034ddaba4fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020412441s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-104104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-104104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-99n7g" [9429fe3f-4a61-42df-96c5-e2e8ddeea9ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:10:20.969920   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
E0914 23:10:22.475051   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-99n7g" [9429fe3f-4a61-42df-96c5-e2e8ddeea9ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.011245694s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-588699 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-588699 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-588699 --alsologtostderr -v=1: (1.130416594s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-588699 -n embed-certs-588699
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-588699 -n embed-certs-588699: exit status 2 (248.771305ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-588699 -n embed-certs-588699
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-588699 -n embed-certs-588699: exit status 2 (253.792903ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-588699 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-588699 -n embed-certs-588699
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-588699 -n embed-certs-588699
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-104104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (108.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0914 23:10:42.955905   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m48.025410435s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (108.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (106.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m46.809677948s)
--- PASS: TestNetworkPlugins/group/flannel/Start (106.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lgcxc" [c75c70a1-1e8b-428a-802d-010a1d93d294] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.026732341s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-104104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (15.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-104104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-104104 replace --force -f testdata/netcat-deployment.yaml: (1.442820119s)
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h9ln8" [fe5d735e-817d-446d-a8ec-6ffe9cc85492] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h9ln8" [fe5d735e-817d-446d-a8ec-6ffe9cc85492] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.015734116s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-104104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-104104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-104104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qmrhr" [5f3a774f-1cf3-46d1-a0ba-91cdc85de7c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:11:25.365982   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/no-preload-344363/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-qmrhr" [5f3a774f-1cf3-46d1-a0ba-91cdc85de7c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.013657263s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-104104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0914 23:11:36.474600   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/ingress-addon-legacy-235631/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (103.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-104104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m43.310841323s)
--- PASS: TestNetworkPlugins/group/bridge/Start (103.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-104104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-104104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xchmr" [9a3e3cd6-167b-4476-8fd6-d64f93faf3cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xchmr" [9a3e3cd6-167b-4476-8fd6-d64f93faf3cd] Running
E0914 23:12:32.812589   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/addons-452179/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.019534254s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-104104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-88s7h" [43123f09-c466-49f0-a806-f1ba8beaea35] Running
E0914 23:12:37.125943   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/old-k8s-version-930717/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019344664s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-104104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-104104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2vn6b" [dcbb0ca4-afe3-4316-93f3-5c7abc6ba75a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:12:45.837850   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/default-k8s-diff-port-799144/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2vn6b" [dcbb0ca4-afe3-4316-93f3-5c7abc6ba75a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.015621196s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-104104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-104104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-104104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jpnxc" [8d8dbc0f-de10-49a5-9f71-27fad6b67617] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:13:29.167836   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:29.173065   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:29.184196   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:29.204476   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:29.244782   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:29.325074   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:29.485464   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:29.806169   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-jpnxc" [8d8dbc0f-de10-49a5-9f71-27fad6b67617] Running
E0914 23:13:30.447269   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:31.727924   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
E0914 23:13:32.189140   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/functional-337253/client.crt: no such file or directory
E0914 23:13:34.288936   13485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-6287/.minikube/profiles/auto-104104/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.009473439s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-104104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-104104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (36/290)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.1/cached-images 0
13 TestDownloadOnly/v1.28.1/binaries 0
14 TestDownloadOnly/v1.28.1/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
232 TestStartStop/group/disable-driver-mounts 0.15
246 TestNetworkPlugins/group/kubenet 5.54
254 TestNetworkPlugins/group/cilium 3.35
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-561154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-561154
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-104104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-104104" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-104104

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-104104"

                                                
                                                
----------------------- debugLogs end: kubenet-104104 [took: 5.416783535s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-104104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-104104
--- SKIP: TestNetworkPlugins/group/kubenet (5.54s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-104104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-104104" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-104104

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: cri-docker daemon status:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: cri-docker daemon config:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: cri-dockerd version:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: containerd daemon status:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: containerd daemon config:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: containerd config dump:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: crio daemon status:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: crio daemon config:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: /etc/crio:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

>>> host: crio config:
* Profile "cilium-104104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-104104"

----------------------- debugLogs end: cilium-104104 [took: 3.202628764s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-104104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-104104
--- SKIP: TestNetworkPlugins/group/cilium (3.35s)